1 Introduction

This script is structured as follows:

  • setup: loading packages and data
  • overview: displaying values such as N of participants, items, etc.
  • preprocessing: data wrangling steps
  • descriptive statistics: summary statistics for the main data patterns
  • data visualization: visualizing the main data patterns we want to report
  • data visualization: covariates: plots for each covariate in relationship with IOS

Main bits necessary to understand the following analysis:

  • the column condition differentiates interactive versus non-interactive
  • the column category differentiates abstract versus concrete
  • the two main dependent variables are closeness and IOS (self-other inclusion)

The following baseline analyses will be performed:

  • assess correlations between covariates
  • assess the effect of condition on IOS
  • assess the effect of condition on closeness
  • assess the effect of all covariates on IOS
  • assess the effect of all covariates on closeness

The following main analyses will be performed:

  • assess the interaction of category and other_contribution on IOS
  • assess the interaction of category and other_contribution on closeness

Then, we will additionally look at difficulty:

  • assess the effect of category on difficulty
  • perform both main analyses again, this time controlling for difficulty

For all brms model fits, the corresponding code chunks are set to eval = FALSE; pre-compiled models saved in the models folder are loaded into this script instead, to save the user time.
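This pattern can be sketched as follows (the model name example_fit is hypothetical; the actual model files live in the models folder):

```r
# Fitting chunk, set to eval = FALSE in the Rmd, so it is not run on knitting.
# brm()'s file argument caches the fit as an .rds file:
# example_fit <- brm(IOS ~ condition + (1 | participant) + (1 | word),
#                    data = df, file = '../models/example_fit')

# Loading chunk, set to eval = TRUE, reads the pre-compiled fit instead:
example_fit <- readRDS('../models/example_fit.rds')
```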

2 Setup

Load packages:

library(tidyverse) # for data processing
library(brms) # for Bayesian mixed models
library(tidybayes) # for half-eye plots of posteriors
library(ggridges) # for joy plots
library(patchwork) # for multiplot arrays
library(gridExtra) # for multiplot arrays when patchwork fails us
library(plotly) # for scatterplot matrix
library(GGally) # for scatterplot matrix

Load data:

# Load data:

df <- read_csv('../data/abstract_concepts_conversations_all.csv')

# Show some random rows:

sample_n(df, 5)
## # A tibble: 5 × 14
##   participant word   condition category closeness type_D pleasantness commitment
##         <dbl> <chr>  <chr>     <chr>        <dbl> <chr>         <dbl>      <dbl>
## 1          22 Sfida  No_inter… Abstract      50.9 passi…         93.4       95.6
## 2           9 Miste… Interact… Abstract      75.9 active         77         84.6
## 3           2 Corona No_inter… Concrete      72.4 passi…         99.2       97.4
## 4          18 Miste… No_inter… Abstract      79.8 passi…         68.6       70.7
## 5          27 Stazi… No_inter… Concrete      83.4 passi…         96.6       99.8
## # ℹ 6 more variables: intimacy <dbl>, difficulty <dbl>,
## #   self_contribution <dbl>, other_contribution <dbl>, IOS <dbl>,
## #   data_exclusion <chr>

3 Reproducibility and reporting

For reporting, get R version:

R.Version()$version.string
## [1] "R version 4.3.0 (2023-04-21)"

For reporting, get package versions:

packageVersion('tidyverse') # for data processing
## [1] '2.0.0'
packageVersion('brms') # for Bayesian mixed models
## [1] '2.19.0'
packageVersion('tidybayes') # for half-eye plots of posteriors
## [1] '3.0.4'
packageVersion('ggridges') # for joy plots
## [1] '0.5.4'
packageVersion('patchwork') # for multiplot arrays
## [1] '1.1.2'
packageVersion('gridExtra') # for multiplot arrays when patchwork fails us
## [1] '2.3'
packageVersion('plotly') # for scatterplot matrix
## [1] '4.10.2'
packageVersion('GGally') # for scatterplot matrix
## [1] '2.1.2'

For reporting, get package citations:

citation('tidyverse') # for data processing
## To cite package 'tidyverse' in publications use:
## 
##   Wickham H, Averick M, Bryan J, Chang W, McGowan LD, François R,
##   Grolemund G, Hayes A, Henry L, Hester J, Kuhn M, Pedersen TL, Miller
##   E, Bache SM, Müller K, Ooms J, Robinson D, Seidel DP, Spinu V,
##   Takahashi K, Vaughan D, Wilke C, Woo K, Yutani H (2019). "Welcome to
##   the tidyverse." _Journal of Open Source Software_, *4*(43), 1686.
##   doi:10.21105/joss.01686 <https://doi.org/10.21105/joss.01686>.
## 
## A BibTeX entry for LaTeX users is
## 
##   @Article{,
##     title = {Welcome to the {tidyverse}},
##     author = {Hadley Wickham and Mara Averick and Jennifer Bryan and Winston Chang and Lucy D'Agostino McGowan and Romain François and Garrett Grolemund and Alex Hayes and Lionel Henry and Jim Hester and Max Kuhn and Thomas Lin Pedersen and Evan Miller and Stephan Milton Bache and Kirill Müller and Jeroen Ooms and David Robinson and Dana Paige Seidel and Vitalie Spinu and Kohske Takahashi and Davis Vaughan and Claus Wilke and Kara Woo and Hiroaki Yutani},
##     year = {2019},
##     journal = {Journal of Open Source Software},
##     volume = {4},
##     number = {43},
##     pages = {1686},
##     doi = {10.21105/joss.01686},
##   }
citation('brms') # for Bayesian mixed models
## To cite brms in publications use:
## 
##   Paul-Christian Bürkner (2017). brms: An R Package for Bayesian
##   Multilevel Models Using Stan. Journal of Statistical Software, 80(1),
##   1-28. doi:10.18637/jss.v080.i01
## 
## Paul-Christian Bürkner (2018). Advanced Bayesian Multilevel Modeling
## with the R Package brms. The R Journal, 10(1), 395-411.
## doi:10.32614/RJ-2018-017
## 
## Paul-Christian Bürkner (2021). Bayesian Item Response Modeling in R
## with brms and Stan. Journal of Statistical Software, 100(5), 1-54.
## doi:10.18637/jss.v100.i05
## 
## To see these entries in BibTeX format, use 'print(<citation>,
## bibtex=TRUE)', 'toBibtex(.)', or set
## 'options(citation.bibtex.max=999)'.
citation('tidybayes') # for half-eye plots of posteriors
## To cite package 'tidybayes' in publications use:
## 
##   Kay M (2023). _tidybayes: Tidy Data and Geoms for Bayesian Models_.
##   doi:10.5281/zenodo.1308151 <https://doi.org/10.5281/zenodo.1308151>,
##   R package version 3.0.4, <http://mjskay.github.io/tidybayes/>.
## 
## A BibTeX entry for LaTeX users is
## 
##   @Manual{,
##     title = {{tidybayes}: Tidy Data and Geoms for {Bayesian} Models},
##     author = {Matthew Kay},
##     year = {2023},
##     note = {R package version 3.0.4},
##     url = {http://mjskay.github.io/tidybayes/},
##     doi = {10.5281/zenodo.1308151},
##   }
citation('ggridges') # for joy plots
## To cite package 'ggridges' in publications use:
## 
##   Wilke C (2022). _ggridges: Ridgeline Plots in 'ggplot2'_. R package
##   version 0.5.4, <https://CRAN.R-project.org/package=ggridges>.
## 
## A BibTeX entry for LaTeX users is
## 
##   @Manual{,
##     title = {ggridges: Ridgeline Plots in 'ggplot2'},
##     author = {Claus O. Wilke},
##     year = {2022},
##     note = {R package version 0.5.4},
##     url = {https://CRAN.R-project.org/package=ggridges},
##   }
citation('patchwork') # for multiplot arrays
## To cite package 'patchwork' in publications use:
## 
##   Pedersen T (2022). _patchwork: The Composer of Plots_. R package
##   version 1.1.2, <https://CRAN.R-project.org/package=patchwork>.
## 
## A BibTeX entry for LaTeX users is
## 
##   @Manual{,
##     title = {patchwork: The Composer of Plots},
##     author = {Thomas Lin Pedersen},
##     year = {2022},
##     note = {R package version 1.1.2},
##     url = {https://CRAN.R-project.org/package=patchwork},
##   }
citation('gridExtra') # for multiplot arrays when patchwork fails us
## To cite package 'gridExtra' in publications use:
## 
##   Auguie B (2017). _gridExtra: Miscellaneous Functions for "Grid"
##   Graphics_. R package version 2.3,
##   <https://CRAN.R-project.org/package=gridExtra>.
## 
## A BibTeX entry for LaTeX users is
## 
##   @Manual{,
##     title = {gridExtra: Miscellaneous Functions for "Grid" Graphics},
##     author = {Baptiste Auguie},
##     year = {2017},
##     note = {R package version 2.3},
##     url = {https://CRAN.R-project.org/package=gridExtra},
##   }
citation('plotly') # for scatterplot matrix
## To cite package 'plotly' in publications use:
## 
##   C. Sievert. Interactive Web-Based Data Visualization with R, plotly,
##   and shiny. Chapman and Hall/CRC Florida, 2020.
## 
## A BibTeX entry for LaTeX users is
## 
##   @Book{,
##     author = {Carson Sievert},
##     title = {Interactive Web-Based Data Visualization with R, plotly, and shiny},
##     publisher = {Chapman and Hall/CRC},
##     year = {2020},
##     isbn = {9781138331457},
##     url = {https://plotly-r.com},
##   }
citation('GGally') # for scatterplot matrix
## To cite package 'GGally' in publications use:
## 
##   Schloerke B, Cook D, Larmarange J, Briatte F, Marbach M, Thoen E,
##   Elberg A, Crowley J (2021). _GGally: Extension to 'ggplot2'_. R
##   package version 2.1.2, <https://CRAN.R-project.org/package=GGally>.
## 
## A BibTeX entry for LaTeX users is
## 
##   @Manual{,
##     title = {GGally: Extension to 'ggplot2'},
##     author = {Barret Schloerke and Di Cook and Joseph Larmarange and Francois Briatte and Moritz Marbach and Edwin Thoen and Amos Elberg and Jason Crowley},
##     year = {2021},
##     note = {R package version 2.1.2},
##     url = {https://CRAN.R-project.org/package=GGally},
##   }

4 Data cleaning and processing

How many participants are affected by data exclusion?

filter(df, data_exclusion == 'yes') %>% count(participant)
## # A tibble: 8 × 2
##   participant     n
##         <dbl> <int>
## 1           4     8
## 2           7    16
## 3          11     8
## 4          42     8
## 5          54     8
## 6          56     8
## 7          59    16
## 8          65     8
# Sum of data points:

filter(df, data_exclusion == 'yes') %>%
  count(participant) %>% 
  summarize(n = sum(n))
## # A tibble: 1 × 1
##       n
##   <int>
## 1    80

8 participants are affected. How much of the total data is this?

nrow(filter(df, data_exclusion == 'yes')) / nrow(df)
## [1] 0.07352941

Exclude data points for which the data_exclusion column is equal to yes, or conversely, only keep those for which it is no:

df <- filter(df,
             data_exclusion == 'no')

For the main model later, we want to look at how category influences IOS in interaction with other_contribution. For this we need to center other_contribution first. We do this here already so that subsets of this data also contain the centered covariate other_contribution_c. We'll later also use difficulty as a control covariate, so let's center that as well, and we center self_contribution too just in case.

# For IOS analysis:

df <- mutate(df,
             other_contribution_c = other_contribution - mean(other_contribution,
                                                              na.rm = TRUE),
             self_contribution_c = self_contribution - mean(self_contribution,
                                                              na.rm = TRUE),
             difficulty_c = difficulty - mean(difficulty,
                                              na.rm = TRUE))

Clean the content of the condition column to make the labels ready for plotting (no underscores or other obscure characters). We'll also convert to factor by hand so that non-interactive comes before interactive in all plots and outputs, which is more intuitive.

df <- mutate(df,
             condition = ifelse(condition == 'No_interactive',
                                'non-interactive', 'interactive'),
             condition = factor(condition,
                                levels = c('non-interactive', 'interactive')))

Also change the content of the category column so that the labels say abstract and concrete, and make the latter come first, which is more intuitive:

df <- mutate(df,
             category = str_to_lower(category),
             category = factor(category,
                               levels = c('concrete', 'abstract')))

The closeness dependent measure is a visual analogue scale (VAS) ranging from 0 to 100. This needs to be rescaled to [0, 1] for the beta distribution.

df <- mutate(df, closeness_01 = closeness / 100)

Check the correlation between closeness values from active and passive in the type_D column. An easy way of doing this is to split the data into two tibbles, one for each type. But then we have to make sure that the rows of the two tibbles still match for the correlation analysis, i.e., same participant-word pairs, in the same order. To verify this, I'll create a unique identifier variable by pasting the participant and word columns together:

# Create unique identifier to double check that values are same after splitting:

df <- mutate(df, unique_id = str_c(participant, '_', word))

# Separate active and passive:

active <- filter(df, type_D == 'active')
passive <- filter(df, type_D == 'passive')

# Double-check that they are the same (i.e., rows aren't mixed up):

all(active$unique_id == passive$unique_id)
## [1] TRUE

Now we can perform the correlation. Let’s do a quick-and-dirty frequentist Spearman correlation here:

cor.test(active$closeness, passive$closeness, method = 'spearman')
## Warning in cor.test.default(active$closeness, passive$closeness, method =
## "spearman"): Cannot compute exact p-value with ties
## 
##  Spearman's rank correlation rho
## 
## data:  active$closeness and passive$closeness
## S = 1753345, p-value < 2.2e-16
## alternative hypothesis: true rho is not equal to 0
## sample estimates:
##       rho 
## 0.9178271

A very high correlation. I’d say that for the main closeness analysis, we should average over the two respective type_D values so as to be extra sure not to incur an independence violation, or whatever we want to call the Bayesian equivalent of that. We’ll also average these two values for plotting, so that each closeness value in the graphs below corresponds to just one trial.

Make a tibble with averages across active and passive, and re-scale the resulting means to [0, 1] in this new tibble, dist_df.

dist_df <- df %>% 
  group_by(participant, word, category, condition,
           pleasantness, commitment, intimacy,
           difficulty, self_contribution, other_contribution) %>% 
  summarize(closeness = mean(closeness)) %>% 
  mutate(closeness_01 = closeness / 100)
## `summarise()` has grouped output by 'participant', 'word', 'category',
## 'condition', 'pleasantness', 'commitment', 'intimacy', 'difficulty',
## 'self_contribution'. You can override using the `.groups` argument.

For all other analyses, we will keep just the active rows (the choice is arbitrary; we could just as well have taken passive). This way we make sure that we don’t accidentally duplicate values:

df <- filter(df, type_D == 'active')

Get a subset with just the interactive condition, which we’ll need for all the covariates that are only measured in this condition (difficulty, self_contribution, other_contribution).

inter_df <- filter(df, condition == 'interactive')

Create another version from this, a tibble that only contains the interactive condition and that also averages across active and passive:

dist_inter_df <- inter_df %>% 
  group_by(participant, word, condition, category,
           other_contribution, other_contribution_c,
           self_contribution, self_contribution_c,
           difficulty, difficulty_c) %>% 
  summarize(closeness = mean(closeness)) %>% 
  mutate(closeness_01 = closeness / 100)
## `summarise()` has grouped output by 'participant', 'word', 'condition',
## 'category', 'other_contribution', 'other_contribution_c', 'self_contribution',
## 'self_contribution_c', 'difficulty'. You can override using the `.groups`
## argument.

5 Overview

Check participants after exclusion:

# Count rows per participants:

df %>% 
  count(participant)
## # A tibble: 66 × 2
##    participant     n
##          <dbl> <int>
##  1           1     8
##  2           2     8
##  3           3     8
##  4           4     4
##  5           5     8
##  6           6     8
##  7           8     8
##  8           9     8
##  9          10     8
## 10          11     4
## # ℹ 56 more rows
# Count rows of this table to print number of participants into console:

df %>% 
  count(participant) %>% 
  nrow()
## [1] 66

66 participants.

Let’s do the same for words (= items):

# Count rows per word:

df %>% 
  count(word)
## # A tibble: 16 × 2
##    word          n
##    <chr>     <int>
##  1 Canoa        30
##  2 Corona       34
##  3 Cristallo    30
##  4 Destino      30
##  5 Diamante     34
##  6 Enigma       30
##  7 Futuro       28
##  8 Ideale       28
##  9 Libertà      34
## 10 Libreria     28
## 11 Logica       34
## 12 Mistero      34
## 13 Muscolo      28
## 14 Sfida        34
## 15 Stazione     34
## 16 Sveglia      34
# Count rows of this table to print number of items into console:

df %>% 
  count(word) %>% 
  nrow()
## [1] 16

16 items.

What are these words?

distinct(df, category, word) %>% 
  arrange(category)
## # A tibble: 16 × 2
##    category word     
##    <fct>    <chr>    
##  1 concrete Diamante 
##  2 concrete Sveglia  
##  3 concrete Corona   
##  4 concrete Stazione 
##  5 concrete Canoa    
##  6 concrete Cristallo
##  7 concrete Libreria 
##  8 concrete Muscolo  
##  9 abstract Libertà  
## 10 abstract Logica   
## 11 abstract Mistero  
## 12 abstract Sfida    
## 13 abstract Enigma   
## 14 abstract Destino  
## 15 abstract Futuro   
## 16 abstract Ideale

We will have to decide whether items need condition random slopes. For this, it’s crucial to know whether condition varies within or between items:

df %>% 
  count(word, condition)
## # A tibble: 32 × 3
##    word      condition           n
##    <chr>     <fct>           <int>
##  1 Canoa     non-interactive    16
##  2 Canoa     interactive        14
##  3 Corona    non-interactive    18
##  4 Corona    interactive        16
##  5 Cristallo non-interactive    16
##  6 Cristallo interactive        14
##  7 Destino   non-interactive    16
##  8 Destino   interactive        14
##  9 Diamante  non-interactive    15
## 10 Diamante  interactive        19
## # ℹ 22 more rows

Condition varies within items, so by-item random slopes for the condition effect are possible and will be fitted in all models below that feature the predictor condition.
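A sketch of what such a random-effects structure looks like in brms syntax (formula only, for illustration; not the exact models fitted below, and the by-participant term is kept to a random intercept here):

```r
library(brms)

# Since condition varies within items (words), a by-word condition slope is
# identifiable. The by-participant part is a minimal illustrative intercept:
ios_formula <- bf(IOS ~ condition +
                    (1 | participant) +
                    (1 + condition | word))
```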

Check how condition and category behave with respect to each other:

df %>% 
  count(condition, category)
## # A tibble: 4 × 3
##   condition       category     n
##   <fct>           <fct>    <int>
## 1 non-interactive concrete   128
## 2 non-interactive abstract   128
## 3 interactive     concrete   124
## 4 interactive     abstract   124

Looks nicely balanced.

6 Descriptive statistics

Calculate closeness as a function of condition (using the data where active and passive have been averaged into one data point):

dist_df %>% 
  group_by(condition) %>% 
  summarize(M = mean(closeness))
## # A tibble: 2 × 2
##   condition           M
##   <fct>           <dbl>
## 1 non-interactive  65.5
## 2 interactive      80.0

Calculate IOS as a function of condition:

df %>% 
  group_by(condition) %>% 
  summarize(M = mean(IOS))
## # A tibble: 2 × 2
##   condition           M
##   <fct>           <dbl>
## 1 non-interactive  1.95
## 2 interactive      3.37

Calculate closeness as a function of category now:

dist_df %>% 
  group_by(category) %>% 
  summarize(M = mean(closeness))
## # A tibble: 2 × 2
##   category     M
##   <fct>    <dbl>
## 1 concrete  72.7
## 2 abstract  72.6

Calculate IOS as a function of category:

df %>% 
  group_by(category) %>% 
  summarize(M = mean(IOS))
## # A tibble: 2 × 2
##   category     M
##   <fct>    <dbl>
## 1 concrete  2.62
## 2 abstract  2.67

Raw descriptive correlations between covariates:

# Extract covariates into one object:

covs <- inter_df %>% 
  select(pleasantness:other_contribution)

# Perform pairwise correlations for all covariates:

cor_tab <- round(cor(covs), 2)

# Show:

cor_tab
##                    pleasantness commitment intimacy difficulty
## pleasantness               1.00       0.48     0.47      -0.29
## commitment                 0.48       1.00     0.45       0.00
## intimacy                   0.47       0.45     1.00      -0.14
## difficulty                -0.29       0.00    -0.14       1.00
## self_contribution          0.43       0.65     0.47       0.01
## other_contribution         0.57       0.47     0.41       0.01
##                    self_contribution other_contribution
## pleasantness                    0.43               0.57
## commitment                      0.65               0.47
## intimacy                        0.47               0.41
## difficulty                      0.01               0.01
## self_contribution               1.00               0.67
## other_contribution              0.67               1.00
# Save:

cor_tab %>% 
  as.data.frame() %>%
  rownames_to_column(var = 'variable') %>% 
  write_csv('../results_tables/E1_covariate_correlations.csv')

The highest correlation is between self_contribution and other_contribution (r = .67), closely followed by self_contribution and commitment (r = .65). Also relatively high is the correlation between pleasantness and other_contribution (r = .57). The pleasantness variable correlates moderately with almost every other variable: commitment (positive), intimacy (positive), self_contribution (positive), and difficulty (negative).

For the description of IOS as a function of the different covariates, I think it would be most intuitive to report the covariate means at each IOS scale point (particularly the lowest and highest), as well as Spearman’s rho. Let’s do that for each covariate in turn.

Pleasantness:

df %>% 
  group_by(IOS) %>% 
  summarize(M = mean(pleasantness))
## # A tibble: 6 × 2
##     IOS     M
##   <dbl> <dbl>
## 1     1  49.2
## 2     2  71.7
## 3     3  74.8
## 4     4  77.2
## 5     5  84.1
## 6     6  88.0
with(df, cor(IOS, pleasantness, method = 'spearman'))
## [1] 0.3723928

Commitment:

df %>% 
  group_by(IOS) %>% 
  summarize(M = mean(commitment))
## # A tibble: 6 × 2
##     IOS     M
##   <dbl> <dbl>
## 1     1  71.0
## 2     2  79.4
## 3     3  79.4
## 4     4  79.9
## 5     5  78.1
## 6     6  87.2
with(df, cor(IOS, commitment, method = 'spearman'))
## [1] 0.1539133

Intimacy:

df %>% 
  group_by(IOS) %>% 
  summarize(M = mean(intimacy))
## # A tibble: 6 × 2
##     IOS     M
##   <dbl> <dbl>
## 1     1  59.3
## 2     2  71.0
## 3     3  69.3
## 4     4  70.8
## 5     5  74.8
## 6     6  84.4
with(df, cor(IOS, intimacy, method = 'spearman'))
## [1] 0.1492464

Difficulty:

df %>% 
  group_by(IOS) %>% 
  summarize(M = mean(difficulty, na.rm = TRUE))
## # A tibble: 6 × 2
##     IOS     M
##   <dbl> <dbl>
## 1     1 68.4 
## 2     2 36.1 
## 3     3 37.8 
## 4     4 32.6 
## 5     5 31.3 
## 6     6  3.98
with(df, cor(IOS, difficulty,
             method = 'spearman',
             use = 'complete.obs'))
## [1] -0.1687921

Self-contribution:

df %>% 
  group_by(IOS) %>% 
  summarize(M = mean(self_contribution, na.rm = TRUE))
## # A tibble: 6 × 2
##     IOS     M
##   <dbl> <dbl>
## 1     1  69.3
## 2     2  79.0
## 3     3  74.5
## 4     4  74.4
## 5     5  77.1
## 6     6  86.1
with(df, cor(IOS, self_contribution,
             method = 'spearman',
             use = 'complete.obs'))
## [1] 0.0003549226

Other-contribution:

df %>% 
  group_by(IOS) %>% 
  summarize(M = mean(other_contribution, na.rm = TRUE))
## # A tibble: 6 × 2
##     IOS     M
##   <dbl> <dbl>
## 1     1  66.6
## 2     2  76.8
## 3     3  74.5
## 4     4  73.5
## 5     5  80.7
## 6     6  83.0
with(df, cor(IOS, other_contribution,
             method = 'spearman',
             use = 'complete.obs'))
## [1] 0.05171539

For presentation purposes only, dichotomize the other_contribution variable via a median split, ordered so that low other contribution comes before high other contribution. This will also be used in the median_split_p code chunk below.

# Calculate median:

md <- median(inter_df$other_contribution)

# Create median split variable:

inter_df <- mutate(inter_df,
                   other_cat = ifelse(other_contribution > md,
                                      'high other contribution',
                                      'low other contribution'),
                   other_cat = factor(other_cat,
                                      levels = c('low other contribution',
                                                 'high other contribution')))

Let’s look at the average IOS for abstract and concrete as a function of the median split:

inter_df %>% 
  group_by(other_cat, category) %>% 
  summarize(M_IOS = mean(IOS),
            M_dist = mean(closeness))
## `summarise()` has grouped output by 'other_cat'. You can override using the
## `.groups` argument.
## # A tibble: 4 × 4
## # Groups:   other_cat [2]
##   other_cat               category M_IOS M_dist
##   <fct>                   <fct>    <dbl>  <dbl>
## 1 low other contribution  concrete  3.38   78.2
## 2 low other contribution  abstract  3.22   75.0
## 3 high other contribution concrete  3.4    81.5
## 4 high other contribution abstract  3.46   82.2

How many responses have scale point 1 and high other contribution?

inter_df %>% 
  count(other_cat, category, IOS) %>% 
  filter(IOS == 1, other_cat == 'high other contribution')
## # A tibble: 1 × 4
##   other_cat               category   IOS     n
##   <fct>                   <fct>    <dbl> <int>
## 1 high other contribution abstract     1     2

7 Data visualization

Check closeness as a function of interactive versus non-interactive:

# Plot core:

condition_closeness_p <- dist_df %>% 
  ggplot(aes(x = closeness, fill = condition)) +
  geom_density(alpha = 0.55) +
  annotate(geom = 'text',
           label = 'interactive condition',
           color = 'purple3',
           x = 75.6, y = 0.03,
           hjust = 1, size = 4) +
  annotate(geom = 'text',
           label = 'non-interactive condition',
           color = 'grey55',
           x = 66.35, y = 0.020,
           hjust = 1, size = 4)

# Scales and axes:

condition_closeness_p <- condition_closeness_p +
  scale_x_continuous(expand = c(0, 0),
                     breaks = seq(0, 100, 25)) +
  scale_y_continuous(expand = c(0, 0),
                     limits = c(0, 0.06)) +
  scale_fill_manual(values = c('grey55', 'purple3')) +
  xlab('Closeness') +
  ylab('Density')

# Cosmetics:

condition_closeness_p <- condition_closeness_p +
  theme_classic() +
  theme(legend.position = 'none',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show:

condition_closeness_p

ggsave('../figures_E1/condition_closeness.pdf', plot = condition_closeness_p,
       width = 5, height = 3.5)
ggsave('../figures_E1/condition_closeness.png', plot = condition_closeness_p,
       width = 5, height = 3.5)

Let’s visualize IOS as a function of condition.

# Plot core:

condition_ios_p <- df %>% 
  count(IOS, condition) %>% 
  ggplot(aes(x = IOS, y = n, fill = condition)) +
  geom_col(position = 'dodge', width = 0.7,
           col = 'black', linewidth = 0.3) +
  annotate(geom = 'text',
           label = 'interactive condition',
           color = 'purple3',
           x = 4, y = 61 + 4.2,
           hjust = 0, size = 4) +
  annotate(geom = 'text',
           label = 'non-interactive condition',
           color = 'grey55',
           x = 1.65, y = 102 + 4.2,
           hjust = 0, size = 4)
# Scales and axes:

condition_ios_p <- condition_ios_p +
  scale_fill_manual(values = c('grey55', 'purple2')) +
  scale_x_continuous(expand = c(0, 0),
                     limits = c(0, 7),
                     breaks = 1:6) +
  scale_y_continuous(expand = c(0, 0),
                     limits = c(0, 120),
                     breaks = seq(0, 120, 20)) +
  ylab('Number of responses') +
  xlab('IOS (inclusion of other scale)')

# Cosmetic tweaking:

condition_ios_p <- condition_ios_p +
  theme_classic() +
  theme(legend.position = 'none',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show in markdown and save in folder:

condition_ios_p

ggsave(plot = condition_ios_p,
       filename = '../figures_E1/condition_ios.pdf',
       width = 5, height = 3.5)
ggsave(plot = condition_ios_p,
       filename = '../figures_E1/condition_ios.png',
       width = 5, height = 3.5)

We’ll combine both into one plot:

# Add titles for differentiation:

condition_closeness_p <- condition_closeness_p + 
  ggtitle('a) Closeness by condition') +
  theme(title = element_text(face = 'bold',
                             size = 16))
condition_ios_p <- condition_ios_p + 
  ggtitle('b) Inclusion-of-other scale by condition') +
  theme(title = element_text(face = 'bold',
                             size = 16))

# Patch them together:

both_p <- condition_closeness_p + condition_ios_p +
  plot_layout(nrow = 1, ncol = 2, widths = c(6, 6), heights = c(3, 3))

# Save the double plot:

ggsave(plot = both_p, filename = '../figures_E1/double_plot.pdf',
       width = 12, height = 4)
ggsave(plot = both_p, filename = '../figures_E1/double_plot.png',
       width = 12, height = 4)

Make a plot of IOS as a function of category (= concept type, abstract v concrete).

# Plot core:

category_ios_p <- df %>% 
  count(IOS, category) %>% 
  ggplot(aes(x = IOS, y = n, fill = category)) +
  geom_col(position = 'dodge', width = 0.7,
           col = 'black', linewidth = 0.3)

# Scales and axes:

category_ios_p <- category_ios_p +
  scale_fill_manual(values = c('goldenrod3', 'steelblue')) +
  scale_x_continuous(expand = c(0, 0),
                     limits = c(0, 7),
                     breaks = 1:6) +
  scale_y_continuous(expand = c(0, 0),
                     limits = c(0, 100),
                     breaks = seq(0, 100, 20)) +
  ylab('Number of responses') +
  xlab('IOS (inclusion of other scale)')

# Cosmetic tweaking:

category_ios_p <- category_ios_p +
  theme_classic() +
  theme(legend.position = 'top',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show in markdown and save in folder:

category_ios_p

ggsave(plot = category_ios_p,
       filename = '../figures_E1/category_ios.pdf',
       width = 5, height = 3.5)
ggsave(plot = category_ios_p,
       filename = '../figures_E1/category_ios.png',
       width = 5, height = 3.5)

Make a plot of this with a facet wrap for the median split:

# Plot core:

median_split_p <- inter_df %>% 
  count(IOS, other_cat, category) %>% 
  ggplot(aes(x = IOS, y = n, fill = category)) +
  geom_col(position = 'fill', width = 0.7,
           col = 'black', linewidth = 0.3) +
  facet_wrap(~other_cat, ncol = 2)

# Scales and axes:

median_split_p <- median_split_p +
  scale_fill_manual(values = c('goldenrod3', 'steelblue')) +
  scale_x_continuous(breaks = 1:6) +
  scale_y_continuous(expand = c(0, 0)) +
  ylab('Proportion') +
  xlab('IOS (inclusion of other scale)')
  # + annotate(geom = 'text',
  #          label = 'concrete concepts',
  #          color = 'goldenrod3',
  #          x = 6.5, y = 0.86,
  #          hjust = 0, size = 4) +
  # annotate(geom = 'text',
  #          label = 'abstract concepts',
  #          color = 'steelblue',
  #          x = 6.5, y = 0.47,
  #          hjust = 0, size = 4)

# Cosmetic tweaking:

median_split_p <- median_split_p +
  theme_classic() +
  theme(legend.position = 'top',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)),
        plot.margin = margin(r = 30))

# Show in markdown and save in folder:

median_split_p

ggsave(plot = median_split_p,
       filename = '../figures_E1/median_split_p.pdf',
       width = 6, height = 3)
ggsave(plot = median_split_p,
       filename = '../figures_E1/median_split_p.png',
       width = 6, height = 3)

The issue with this figure is that the IOS = 1 bar within the high other-contribution facet is based on just a few data points. Because the bars show proportions rather than counts, it makes it seem as if high other contribution actually has lower IOS for abstract concepts.

… although the fact that the predicted pattern is not readily apparent in this plot should give us pause, as it already suggests this is not a strong result.

But anyway, let’s see what we can do with averages. Averaging an ordinal scale is a bit contentious, but it can help us visualize this pattern. Let’s make a plot with the covariate on the x-axis and the IOS average for abstract and concrete concepts shown as two separate lines; if the predicted interaction holds, these two lines should diverge. The other_contribution variable, however, is continuous, so in order to average across multiple values, let’s bin the scale into 20-40, 40-60, 60-80, and 80-100.

inter_df <- mutate(inter_df,
                   other_bin = case_when(other_contribution < 40 ~ '20-40',
                                         other_contribution >= 40 & other_contribution < 60 ~ '40-60',
                                         other_contribution >= 60 & other_contribution < 80 ~ '60-80',
                                         other_contribution >= 80 & other_contribution <= 100 ~ '80-100'))

Do a sanity check with means:

inter_df %>% 
  group_by(other_bin) %>% 
  summarize(M = mean(other_contribution))
## # A tibble: 4 × 2
##   other_bin     M
##   <chr>     <dbl>
## 1 20-40      28.3
## 2 40-60      50.9
## 3 60-80      71.4
## 4 80-100     92.1

Makes sense!
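As a side note, an equivalent and somewhat less error-prone way to create these bins is base R’s cut(); a small sketch with made-up values spanning the bin edges:

```r
# Hypothetical values chosen to sit on and around the bin edges:
x <- c(25, 40, 59, 60, 79, 80, 100)

# right = FALSE makes the intervals left-closed ([20, 40), [40, 60), ...),
# matching the case_when() conditions; include.lowest = TRUE keeps 100
# inside the final bin:
bins <- cut(x,
            breaks = c(20, 40, 60, 80, 100),
            labels = c('20-40', '40-60', '60-80', '80-100'),
            right = FALSE, include.lowest = TRUE)
```

With cut(), the breaks and labels are stated once each, so the bin edges cannot drift out of sync the way hand-written case_when() conditions can.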

Next, we can compute the IOS average for abstract and concrete concepts by bin:

bin_avgs <- inter_df %>% 
  group_by(other_bin, category) %>% 
  summarize(IOS_M = mean(IOS),
            IOS_SD = sd(IOS))
## `summarise()` has grouped output by 'other_bin'. You can override using the
## `.groups` argument.
# Append counts so that we can compute simple standard errors of the mean:

bin_counts <- inter_df %>% 
  count(other_bin, category)

# Join together:

bin_avgs <- left_join(bin_avgs, bin_counts)
## Joining with `by = join_by(other_bin, category)`
# Compute standard errors:

bin_avgs <- mutate(bin_avgs, 
                   SE = IOS_SD / sqrt(n),
                   lower = IOS_M - SE,
                   upper = IOS_M + SE)
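To spell out the arithmetic behind these columns: the standard error of the mean is the standard deviation divided by the square root of the group size, and the plotted bands are the mean plus or minus one SE (or 1.96 SE for a normal-approximation 95% CI). A quick numeric check with made-up values:

```r
# Standard error of the mean and the derived intervals used in the plots:
x <- c(2, 4, 4, 4, 5, 5, 7, 9)
se <- sd(x) / sqrt(length(x))          # SE = SD / sqrt(n)
band <- mean(x) + c(-1, 1) * se        # mean +/- 1 SE
ci95 <- mean(x) + c(-1.96, 1.96) * se  # normal-approximation 95% CI
```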

Let’s make a plot of this:

# Plot core:

bin_p <- bin_avgs %>% 
  ggplot(aes(x = other_bin, y = IOS_M,
             color = category, group = category)) +
  geom_line(size = 0.8) +
  geom_point(pch = 15, size = 2) +
  geom_errorbar(aes(ymin = lower, ymax = upper),
                width = 0)

# Scales and axes:

bin_p <- bin_p +
  scale_color_manual(values = c('goldenrod3', 'steelblue')) +
  scale_y_continuous(expand = c(0, 0),
                     limits = c(1, 4),
                     breaks = seq(1, 4, 0.5)) +
  ylab('IOS mean') +
  xlab('Other contribution (binned)') +
  annotate(geom = 'text',
           label = 'concrete concepts',
           color = 'goldenrod3',
           x = 1.9, y = 3.7,
           hjust = 1, size = 3) +
  annotate(geom = 'text',
           label = 'abstract concepts',
           color = 'steelblue',
           x = 1.3, y = 2.1,
           hjust = 0, size = 3)

# Cosmetic tweaking:

bin_p <- bin_p +
  theme_classic() +
  theme(legend.position = 'none',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show in markdown and save in folder:

bin_p

ggsave(plot = bin_p,
       filename = '../figures_E1/IOS_bin_plot.pdf',
       width = 4.5, height = 3.4)
ggsave(plot = bin_p,
       filename = '../figures_E1/IOS_bin_plot.png',
       width = 4.5, height = 3.4)

The issue with this plot is that the other-contribution covariate is heavily skewed towards positive values (check hist(df$other_contribution)), which means there are far fewer observations at lower values. That also explains why the standard errors are so much wider towards the left of the plot. Perhaps a more informative view is to split the data into four equal-sized groups.
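ggplot2’s cut_number(), which we use below, splits a vector at its sample quantiles so that each group has (approximately) the same number of observations. A base-R sketch of what it does, using simulated right-skewed data as a stand-in:

```r
set.seed(1)
x <- 100 * rbeta(1000, 5, 2)  # made-up data, skewed high like other_contribution

# cut_number(x, 4) is essentially cut() at the sample quartiles:
breaks <- quantile(x, probs = seq(0, 1, 0.25))
bins <- cut(x, breaks = breaks, include.lowest = TRUE)
table(bins)  # four groups of 250 observations each
```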

inter_df <- mutate(inter_df,
                   other_bin2 = cut_number(other_contribution, 4))

# Redo the averages:

bin_avgs <- inter_df %>% 
  group_by(other_bin2, category) %>% 
  summarize(IOS_M = mean(IOS),
            IOS_SD = sd(IOS))
## `summarise()` has grouped output by 'other_bin2'. You can override using the
## `.groups` argument.
# Append counts so that we can compute simple standard errors of the mean:

bin_counts <- inter_df %>% 
  count(other_bin2, category)

# Join together:

bin_avgs <- left_join(bin_avgs, bin_counts)
## Joining with `by = join_by(other_bin2, category)`
# Compute standard errors:

bin_avgs <- mutate(bin_avgs, 
                   SE = IOS_SD / sqrt(n),
                   lower = IOS_M - SE,
                   upper = IOS_M + SE,
                   lower_CI = IOS_M - 1.96 * SE,
                   upper_CI = IOS_M + 1.96 * SE)

Let’s do that new bin plot:

# Plot core:

bin_p <- bin_avgs %>% 
  ggplot(aes(x = other_bin2, y = IOS_M,
             color = category, group = category)) +
  geom_line(size = 0.8) +
  geom_point(pch = 15, size = 2,
             position = position_dodge(width = 0.1)) +
  geom_errorbar(aes(ymin = lower_CI, ymax = upper_CI),
                width = 0,
                position = position_dodge(width = 0.1))

# Scales and axes:

bin_p <- bin_p +
  scale_color_manual(values = c('goldenrod3', 'steelblue')) +
  scale_y_continuous(expand = c(0, 0),
                     limits = c(2, 4.5),
                     breaks = seq(2, 4.5, 0.5)) +
  ylab('IOS mean') +
  xlab('Other contribution (binned)') +
  annotate(geom = 'text',
           label = 'concrete concepts',
           color = 'goldenrod3',
           x = 1.05, y = 3.9,
           hjust = 0, size = 3) +
  annotate(geom = 'text',
           label = 'abstract concepts',
           color = 'steelblue',
           x = 1.05, y = 2.7,
           hjust = 0, size = 3)

# Cosmetic tweaking:

bin_p <- bin_p +
  theme_classic() +
  theme(legend.position = 'none',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show in markdown and save in folder:

bin_p

ggsave(plot = bin_p,
       filename = '../figures_E1/IOS_equal_bin_plot.pdf',
       width = 4.5, height = 3.4)
ggsave(plot = bin_p,
       filename = '../figures_E1/IOS_equal_bin_plot.png',
       width = 4.5, height = 3.4)

Looks better, and more interpretable.

8 Data visualization: covariates and IOS

Scatter plot matrix for covariates:

# Settings for diagonal:

diag_wrap <- wrap("densityDiag", alpha = 0.5,
                  fill = 'steelblue')

# Plot core:

scatter_p <- ggpairs(covs,
                     aes(alpha = 0.5),
                     diag = list(continuous = diag_wrap))

# Cosmetics:

scatter_p <- scatter_p +
  theme_minimal()

# Show in markdown and save in folder:

scatter_p

ggsave(plot = scatter_p,
       filename = '../figures_E1/covariate_matrix.pdf',
       width = 8, height = 8)
ggsave(plot = scatter_p,
       filename = '../figures_E1/covariate_matrix.png',
       width = 8, height = 8)

Pleasantness and IOS:

# Plot core:

pleasant_joy_p <- inter_df %>% 
  ggplot(aes(x = pleasantness, y = factor(IOS))) +
  geom_density_ridges(fill = 'steelblue',
                      jittered_points = TRUE,
                      position = position_points_jitter(width = 0.05,
                                                        height = 0),
                      point_shape = '|', point_size = 2,
                      point_alpha = 1, alpha = 0.7)

# Scales and axes:

pleasant_joy_p <- pleasant_joy_p +
  ylab('IOS\n(inclusion of other scale)') +
  xlab('Pleasantness') +
  scale_x_continuous(breaks = seq(0, 100, 20),
                     limits = c(-20, +120))

# Cosmetic tweaking:

pleasant_joy_p <- pleasant_joy_p +
  theme_classic() +
  theme(legend.position = 'none',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show in markdown and save in folder:

pleasant_joy_p
## Picking joint bandwidth of 7.61

ggsave(plot = pleasant_joy_p,
       filename = '../figures_E1/pleasantness_joy.pdf',
       width = 5, height = 3.5)
## Picking joint bandwidth of 7.61
ggsave(plot = pleasant_joy_p,
       filename = '../figures_E1/pleasantness_joy.png',
       width = 5, height = 3.5)
## Picking joint bandwidth of 7.61

Commitment and IOS:

# Plot core:

commit_joy_p <- inter_df %>% 
  ggplot(aes(x = commitment, y = factor(IOS))) +
  geom_density_ridges(fill = 'steelblue',
                      jittered_points = TRUE,
                      position = position_points_jitter(width = 0.05,
                                                        height = 0),
                      point_shape = '|', point_size = 2,
                      point_alpha = 1, alpha = 0.7)

# Scales and axes:

commit_joy_p <- commit_joy_p +
  ylab('IOS\n(inclusion of other scale)') +
  xlab('Commitment') +
  scale_x_continuous(breaks = seq(0, 100, 20),
                     limits = c(-20, +120))

# Cosmetic tweaking:

commit_joy_p <- commit_joy_p +
  theme_classic() +
  theme(legend.position = 'none',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show in markdown and save in folder:

commit_joy_p
## Picking joint bandwidth of 5.42

ggsave(plot = commit_joy_p,
       filename = '../figures_E1/commitment_joy.pdf',
       width = 4.5, height = 4)
## Picking joint bandwidth of 5.42
ggsave(plot = commit_joy_p,
       filename = '../figures_E1/commitment_joy.png',
       width = 4.5, height = 4)
## Picking joint bandwidth of 5.42

Intimacy and IOS:

# Plot core:

intimate_joy_p <- inter_df %>% 
  ggplot(aes(x = intimacy, y = factor(IOS))) +
  geom_density_ridges(fill = 'steelblue',
                      jittered_points = TRUE,
                      position = position_points_jitter(width = 0.05,
                                                        height = 0),
                      point_shape = '|', point_size = 2,
                      point_alpha = 1, alpha = 0.7)

# Scales and axes:

intimate_joy_p <- intimate_joy_p +
  ylab('IOS\n(inclusion of other scale)') +
  xlab('Intimacy') +
  scale_x_continuous(breaks = seq(0, 100, 20),
                     limits = c(-20, +120))

# Cosmetic tweaking:

intimate_joy_p <- intimate_joy_p +
  theme_classic() +
  theme(legend.position = 'none',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show in markdown and save in folder:

intimate_joy_p
## Picking joint bandwidth of 8.95

ggsave(plot = intimate_joy_p,
       filename = '../figures_E1/intimacy_joy.pdf',
       width = 4.5, height = 4)
## Picking joint bandwidth of 8.95
ggsave(plot = intimate_joy_p,
       filename = '../figures_E1/intimacy_joy.png',
       width = 4.5, height = 4)
## Picking joint bandwidth of 8.95

Difficulty and IOS:

# Plot core:

difficult_joy_p <- inter_df %>% 
  ggplot(aes(x = difficulty, y = factor(IOS)))+
  geom_density_ridges(fill = 'steelblue',
                      jittered_points = TRUE,
                      position = position_points_jitter(width = 0.05,
                                                        height = 0),
                      point_shape = '|', point_size = 2,
                      point_alpha = 1, alpha = 0.7)

# Scales and axes:

difficult_joy_p <- difficult_joy_p +
  ylab('IOS\n(inclusion of other scale)') +
  xlab('Difficulty') +
  scale_x_continuous(breaks = seq(0, 100, 20),
                     limits = c(-20, +120))

# Cosmetic tweaking:

difficult_joy_p <- difficult_joy_p +
  theme_classic() +
  theme(legend.position = 'none',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show in markdown and save in folder:

difficult_joy_p
## Picking joint bandwidth of 9.84

ggsave(plot = difficult_joy_p,
       filename = '../figures_E1/difficulty_joy.pdf',
       width = 4.5, height = 4)
## Picking joint bandwidth of 9.84
ggsave(plot = difficult_joy_p,
       filename = '../figures_E1/difficulty_joy.png',
       width = 4.5, height = 4)
## Picking joint bandwidth of 9.84

Self contribution and IOS:

# Plot core:

self_joy_p <- inter_df %>% 
  ggplot(aes(x = self_contribution, y = factor(IOS))) +
  geom_density_ridges(fill = 'steelblue',
                      jittered_points = TRUE,
                      position = position_points_jitter(width = 0.05,
                                                        height = 0),
                      point_shape = '|', point_size = 2,
                      point_alpha = 1, alpha = 0.7)

# Scales and axes:

self_joy_p <- self_joy_p +
  ylab('IOS\n(inclusion of other scale)') +
  xlab('Self-contribution') +
  scale_x_continuous(breaks = seq(0, 100, 20),
                     limits = c(-20, +120))

# Cosmetic tweaking:

self_joy_p <- self_joy_p +
  theme_classic() +
  theme(legend.position = 'none',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show in markdown and save in folder:

self_joy_p
## Picking joint bandwidth of 8.49

ggsave(plot = self_joy_p,
       filename = '../figures_E1/self_joy.pdf',
       width = 4.5, height = 4)
## Picking joint bandwidth of 8.49
ggsave(plot = self_joy_p,
       filename = '../figures_E1/self_joy.png',
       width = 4.5, height = 4)
## Picking joint bandwidth of 8.49

Other contribution and IOS:

# Plot core:

other_joy_p <- inter_df %>% 
  ggplot(aes(x = other_contribution, y = factor(IOS))) +
  geom_density_ridges(fill = 'steelblue',
                      jittered_points = TRUE,
                      position = position_points_jitter(width = 0.05,
                                                        height = 0),
                      point_shape = '|', point_size = 2,
                      point_alpha = 1, alpha = 0.7)

# Scales and axes:

other_joy_p <- other_joy_p +
  ylab('IOS\n(inclusion of other scale)') +
  xlab('Other-contribution') +
  scale_x_continuous(breaks = seq(0, 100, 20),
                     limits = c(-20, +120))

# Cosmetic tweaking:

other_joy_p <- other_joy_p +
  theme_classic() +
  theme(legend.position = 'none',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show in markdown and save in folder:

other_joy_p
## Picking joint bandwidth of 9.19

ggsave(plot = other_joy_p,
       filename = '../figures_E1/other_joy.pdf',
       width = 4.5, height = 4)
## Picking joint bandwidth of 9.19
ggsave(plot = other_joy_p,
       filename = '../figures_E1/other_joy.png',
       width = 4.5, height = 4)
## Picking joint bandwidth of 9.19

Make a plot matrix out of all of these. We only need y-axis titles for pleasantness and difficulty, which occur on the left-hand side, so we’ll switch off the y-axis titles for commitment, intimacy, self-contribution, and other-contribution.

# Define layout matrix:

my_layout <- matrix(c(1, 2, 3,
                      4, 5, 6),
                    byrow = TRUE, ncol = 3)

# Change titles:

commit_joy_p <- commit_joy_p +
  ylab(NULL)
intimate_joy_p <- intimate_joy_p +
  ylab(NULL)
self_joy_p <- self_joy_p +
  ylab(NULL)
other_joy_p <- other_joy_p +
  ylab(NULL)

# Show:

grid.arrange(pleasant_joy_p, commit_joy_p, intimate_joy_p,
             difficult_joy_p, self_joy_p, other_joy_p,
             layout_matrix = my_layout)

# Save:

all_joy <- arrangeGrob(pleasant_joy_p, commit_joy_p, intimate_joy_p,
                       difficult_joy_p, self_joy_p, other_joy_p,
                       layout_matrix = my_layout)

ggsave(all_joy, file = '../figures_E1/all_covariate_joy.pdf',
       width = 12, height = 6)
ggsave(all_joy, file = '../figures_E1/all_covariate_joy.png',
       width = 12, height = 6)

9 Data visualization: covariates and closeness

Scatterplots of closeness against each covariate, which we’ll later assemble into a plot matrix. First, let’s start with pleasantness:

pleasant_scatter <- inter_df |> 
  ggplot(aes(x = pleasantness, y = closeness)) +
  geom_smooth(method = 'lm', col = 'purple',
              se = FALSE, size = 1.5) +
  geom_point()

# Scales and axes:

pleasant_scatter <- pleasant_scatter +
  ylab('Closeness') +
  xlab('Pleasantness') +
  scale_x_continuous(breaks = seq(0, 100, 20),
                     limits = c(-5, +105))

# Cosmetic tweaking:

pleasant_scatter <- pleasant_scatter +
  theme_classic() +
  theme(legend.position = 'none',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show and save:

pleasant_scatter
## `geom_smooth()` using formula = 'y ~ x'

ggsave(filename = '../figures_E1/E1_pleasant_scatter.pdf', plot = pleasant_scatter,
       width = 4.5, height = 4)
## `geom_smooth()` using formula = 'y ~ x'
ggsave(filename = '../figures_E1/E1_pleasant_scatter.png', plot = pleasant_scatter,
       width = 4.5, height = 4)
## `geom_smooth()` using formula = 'y ~ x'

Second, commitment:

commitment_scatter <- inter_df |> 
  ggplot(aes(x = commitment, y = closeness)) +
  geom_smooth(method = 'lm', col = 'purple',
              se = FALSE, size = 1.5) +
  geom_point()

# Scales and axes:

commitment_scatter <- commitment_scatter +
  ylab('Closeness') +
xlab('Commitment') +
  scale_x_continuous(breaks = seq(0, 100, 20),
                     limits = c(-5, +105))

# Cosmetic tweaking:

commitment_scatter <- commitment_scatter +
  theme_classic() +
  theme(legend.position = 'none',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show and save:

commitment_scatter
## `geom_smooth()` using formula = 'y ~ x'

ggsave(filename = '../figures_E1/E1_commitment_scatter.pdf',
       plot = commitment_scatter,
       width = 4.5, height = 4)
## `geom_smooth()` using formula = 'y ~ x'
ggsave(filename = '../figures_E1/E1_commitment_scatter.png',
       plot = commitment_scatter,
       width = 4.5, height = 4)
## `geom_smooth()` using formula = 'y ~ x'

Third, intimacy:

intimacy_scatter <- inter_df |> 
  ggplot(aes(x = intimacy, y = closeness)) +
  geom_smooth(method = 'lm', col = 'purple',
              se = FALSE, size = 1.5) +
  geom_point()

# Scales and axes:

intimacy_scatter <- intimacy_scatter +
  ylab('Closeness') +
xlab('Intimacy') +
  scale_x_continuous(breaks = seq(0, 100, 20),
                     limits = c(-5, +105))

# Cosmetic tweaking:

intimacy_scatter <- intimacy_scatter +
  theme_classic() +
  theme(legend.position = 'none',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show and save:

intimacy_scatter
## `geom_smooth()` using formula = 'y ~ x'

ggsave(filename = '../figures_E1/E1_intimacy_scatter.pdf',
       plot = intimacy_scatter,
       width = 4.5, height = 4)
## `geom_smooth()` using formula = 'y ~ x'
ggsave(filename = '../figures_E1/E1_intimacy_scatter.png',
       plot = intimacy_scatter,
       width = 4.5, height = 4)
## `geom_smooth()` using formula = 'y ~ x'

Fourth, difficulty:

difficulty_scatter <- inter_df |> 
  ggplot(aes(x = difficulty, y = closeness)) +
  geom_smooth(method = 'lm', col = 'purple',
              se = FALSE, size = 1.5) +
  geom_point()

# Scales and axes:

difficulty_scatter <- difficulty_scatter +
  ylab('Closeness') +
xlab('Difficulty') +
  scale_x_continuous(breaks = seq(0, 100, 20),
                     limits = c(-5, +105))

# Cosmetic tweaking:

difficulty_scatter <- difficulty_scatter +
  theme_classic() +
  theme(legend.position = 'none',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show and save:

difficulty_scatter
## `geom_smooth()` using formula = 'y ~ x'

ggsave(filename = '../figures_E1/E1_difficulty_scatter.pdf',
       plot = difficulty_scatter,
       width = 4.5, height = 4)
## `geom_smooth()` using formula = 'y ~ x'
ggsave(filename = '../figures_E1/E1_difficulty_scatter.png',
       plot = difficulty_scatter,
       width = 4.5, height = 4)
## `geom_smooth()` using formula = 'y ~ x'

Fifth, self-contribution:

self_contribution_scatter <- inter_df |> 
  ggplot(aes(x = self_contribution, y = closeness)) +
  geom_smooth(method = 'lm', col = 'purple',
              se = FALSE, size = 1.5) +
  geom_point()

# Scales and axes:

self_contribution_scatter <- self_contribution_scatter +
  ylab('Closeness') +
xlab('Self-contribution') +
  scale_x_continuous(breaks = seq(0, 100, 20),
                     limits = c(-5, +105))

# Cosmetic tweaking:

self_contribution_scatter <- self_contribution_scatter +
  theme_classic() +
  theme(legend.position = 'none',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show and save:

self_contribution_scatter
## `geom_smooth()` using formula = 'y ~ x'

ggsave(filename = '../figures_E1/E1_self_contribution_scatter.pdf',
       plot = self_contribution_scatter,
       width = 4.5, height = 4)
## `geom_smooth()` using formula = 'y ~ x'
ggsave(filename = '../figures_E1/E1_self_contribution_scatter.png',
       plot = self_contribution_scatter,
       width = 4.5, height = 4)
## `geom_smooth()` using formula = 'y ~ x'

Sixth, other-contribution:

other_contribution_scatter <- inter_df |> 
  ggplot(aes(x = other_contribution, y = closeness)) +
  geom_smooth(method = 'lm', col = 'purple',
              se = FALSE, size = 1.5) +
  geom_point()

# Scales and axes:

other_contribution_scatter <- other_contribution_scatter +
  ylab('Closeness') +
xlab('Other-contribution') +
  scale_x_continuous(breaks = seq(0, 100, 20),
                     limits = c(-5, +105))

# Cosmetic tweaking:

other_contribution_scatter <- other_contribution_scatter +
  theme_classic() +
  theme(legend.position = 'none',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show and save:

other_contribution_scatter
## `geom_smooth()` using formula = 'y ~ x'

ggsave(filename = '../figures_E1/E1_other_contribution_scatter.pdf',
       plot = other_contribution_scatter,
       width = 4.5, height = 4)
## `geom_smooth()` using formula = 'y ~ x'
ggsave(filename = '../figures_E1/E1_other_contribution_scatter.png',
       plot = other_contribution_scatter,
       width = 4.5, height = 4)
## `geom_smooth()` using formula = 'y ~ x'

Put everything together into a big plot matrix:

# Define layout matrix:

my_layout <- matrix(c(1, 2, 3,
                      4, 5, 6),
                    byrow = TRUE, ncol = 3)

# Change titles:

commitment_scatter <- commitment_scatter +
  ylab(NULL)
intimacy_scatter <- intimacy_scatter +
  ylab(NULL)
self_contribution_scatter <- self_contribution_scatter +
  ylab(NULL)
other_contribution_scatter <- other_contribution_scatter +
  ylab(NULL)

# Show:

grid.arrange(pleasant_scatter, commitment_scatter, intimacy_scatter,
             difficulty_scatter, self_contribution_scatter, other_contribution_scatter,
             layout_matrix = my_layout)

# Save:

all_scatter <- arrangeGrob(pleasant_scatter, commitment_scatter, intimacy_scatter,
                           difficulty_scatter, self_contribution_scatter, other_contribution_scatter,
                           layout_matrix = my_layout)

ggsave(all_scatter, file = '../figures_E1/all_covariate_closeness.pdf',
       width = 12, height = 6)
ggsave(all_scatter, file = '../figures_E1/all_covariate_closeness.png',
       width = 12, height = 6)

10 Statistical models

Let’s model IOS score as a function of interactive/non-interactive condition. We’ll use a cumulative mixed model here because 1) our response is ordinal, and 2) we need random effects for participants and items.

condition_IOS_mdl <- brm(IOS ~ 
                           
                           # Fixed effects:
                           
                           1 + condition  +
                           
                           # Random effects:
                           
                           (1 + condition|participant) +
                           (1 + condition|word),
           
                         data = df,
                         family = cumulative,
           
                         # MCMC settings:
           
                         seed = 42,
                         cores = 4,
                         iter = 6000,
                         warmup = 3000,
                         control = list(adapt_delta = 0.9))

# Save model:

save(condition_IOS_mdl, file = '../models_E1/condition_IOS_mdl.RData')

Load model:

load('../models_E1/condition_IOS_mdl.RData')

Show priors:

prior_summary(condition_IOS_mdl)
##                 prior     class                 coef       group resp dpar
##                (flat)         b                                           
##                (flat)         b conditioninteractive                      
##  student_t(3, 0, 2.5) Intercept                                           
##  student_t(3, 0, 2.5) Intercept                    1                      
##  student_t(3, 0, 2.5) Intercept                    2                      
##  student_t(3, 0, 2.5) Intercept                    3                      
##  student_t(3, 0, 2.5) Intercept                    4                      
##  student_t(3, 0, 2.5) Intercept                    5                      
##  lkj_corr_cholesky(1)         L                                           
##  lkj_corr_cholesky(1)         L                      participant          
##  lkj_corr_cholesky(1)         L                             word          
##  student_t(3, 0, 2.5)        sd                                           
##  student_t(3, 0, 2.5)        sd                      participant          
##  student_t(3, 0, 2.5)        sd conditioninteractive participant          
##  student_t(3, 0, 2.5)        sd            Intercept participant          
##  student_t(3, 0, 2.5)        sd                             word          
##  student_t(3, 0, 2.5)        sd conditioninteractive        word          
##  student_t(3, 0, 2.5)        sd            Intercept        word          
##  nlpar lb ub       source
##                   default
##              (vectorized)
##                   default
##              (vectorized)
##              (vectorized)
##              (vectorized)
##              (vectorized)
##              (vectorized)
##                   default
##              (vectorized)
##              (vectorized)
##         0         default
##         0    (vectorized)
##         0    (vectorized)
##         0    (vectorized)
##         0    (vectorized)
##         0    (vectorized)
##         0    (vectorized)

Perform posterior predictive checks of the ordinal model using an ECDF overlay, which is more appropriate than a density overlay for a categorical response.

pp_check(condition_IOS_mdl, ndraws = 100, type = 'ecdf_overlay')

Check this model:

condition_IOS_mdl
##  Family: cumulative 
##   Links: mu = logit; disc = identity 
## Formula: IOS ~ 1 + condition + (1 + condition | participant) + (1 + condition | word) 
##    Data: df (Number of observations: 504) 
##   Draws: 4 chains, each with iter = 6000; warmup = 3000; thin = 1;
##          total post-warmup draws = 12000
## 
## Group-Level Effects: 
## ~participant (Number of levels: 66) 
##                                     Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept)                           2.82      0.37     2.18     3.62 1.00
## sd(conditioninteractive)                2.77      0.38     2.10     3.58 1.00
## cor(Intercept,conditioninteractive)    -0.72      0.08    -0.85    -0.53 1.00
##                                     Bulk_ESS Tail_ESS
## sd(Intercept)                           3145     5656
## sd(conditioninteractive)                3016     5547
## cor(Intercept,conditioninteractive)     3921     6101
## 
## ~word (Number of levels: 16) 
##                                     Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept)                           0.21      0.16     0.01     0.61 1.00
## sd(conditioninteractive)                0.43      0.27     0.02     1.03 1.00
## cor(Intercept,conditioninteractive)    -0.26      0.56    -0.98     0.90 1.00
##                                     Bulk_ESS Tail_ESS
## sd(Intercept)                           5292     6833
## sd(conditioninteractive)                2920     4245
## cor(Intercept,conditioninteractive)     3666     5912
## 
## Population-Level Effects: 
##                      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
## Intercept[1]            -1.08      0.41    -1.88    -0.29 1.00     2378
## Intercept[2]             2.48      0.43     1.66     3.35 1.00     3155
## Intercept[3]             4.64      0.47     3.73     5.61 1.00     3617
## Intercept[4]             6.42      0.52     5.42     7.46 1.00     4070
## Intercept[5]             8.97      0.64     7.76    10.26 1.00     5173
## conditioninteractive     4.22      0.48     3.32     5.18 1.00     3922
##                      Tail_ESS
## Intercept[1]             4271
## Intercept[2]             4933
## Intercept[3]             5774
## Intercept[4]             6147
## Intercept[5]             7258
## conditioninteractive     6314
## 
## Family Specific Parameters: 
##      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## disc     1.00      0.00     1.00     1.00   NA       NA       NA
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
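To make the cumulative-logit estimates above more interpretable, the thresholds and the condition effect can be converted into predicted IOS category probabilities with `plogis()`. A minimal base-R sketch using the posterior means reported above (rounded point estimates only; a full analysis would average over posterior draws):

```r
# Posterior-mean thresholds and condition effect from the summary above:
thresholds <- c(-1.08, 2.48, 4.64, 6.42, 8.97)
b_condition <- 4.22

# Cumulative-logit model: P(Y <= k) = plogis(threshold_k - eta).
# Successive differences of the cumulative probabilities (plus 1 for
# the top category) give the per-category probabilities:
category_probs <- function(eta, thresholds) {
  cum <- c(plogis(thresholds - eta), 1)
  diff(c(0, cum))
}

probs_non <- category_probs(0, thresholds)            # non-interactive (reference)
probs_int <- category_probs(b_condition, thresholds)  # interactive

round(probs_non, 2)
round(probs_int, 2)
```

The interactive condition's large positive effect shifts probability mass toward the upper IOS categories.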

Visualize the posterior distribution of the main condition effect:

# Extract posterior values:

posts <- as_draws_df(condition_IOS_mdl) %>% 
  select(b_conditioninteractive)
# Plot this:

posts %>% 
  ggplot(aes(x = b_conditioninteractive)) +
  geom_density(fill = 'steelblue', alpha = 0.8) +
  geom_vline(xintercept = 0, linetype = 2) +
  scale_y_continuous(expand = c(0, 0)) +
  theme_classic()

Perform a one-sided hypothesis test on the condition fixed-effect coefficient:

hypothesis(condition_IOS_mdl, 'conditioninteractive > 0')
## Hypothesis Tests for class b:
##                 Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (conditioninterac... > 0     4.22      0.48     3.46     5.02        Inf
##   Post.Prob Star
## 1         1    *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.

Now we model closeness as a function of condition, using the same random effects structure. We use the mean/phi parametrization of the beta distribution, i.e., brms’s Beta family with a logit link for the mean and a log link for the precision parameter phi.
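A note on the mean/phi parametrization: rather than the classical shape parameters, brms’s Beta family uses a mean mu and a precision phi, which map onto the shapes as shape1 = mu * phi and shape2 = (1 - mu) * phi. A quick base-R check with hypothetical values:

```r
# Hypothetical mean and precision (not estimates from any model here):
mu  <- 0.7   # mean closeness on the 0-1 scale
phi <- 25    # precision

# Conversion to the classical beta shape parameters:
shape1 <- mu * phi        # 17.5
shape2 <- (1 - mu) * phi  # 7.5

# Sanity checks: the implied distribution has mean mu and
# variance mu * (1 - mu) / (1 + phi):
shape1 / (shape1 + shape2)
mu * (1 - mu) / (1 + phi)
```

Larger phi means a more concentrated distribution around the mean, which is why brms models phi on the log scale.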

condition_dist_mdl <- brm(bf(closeness_01 ~ 
                               
                               # Fixed effects:
                               
                               1 + condition  +
                               
                               # Random effects:
                               
                               (1 + condition|participant) +
                               (1 + condition|word),
                             
                             # Family-specific parameter (shape):
                             
                             phi ~ 1 + condition),
                          
                          # General stuff:
           
                          data = dist_df, # averaged distance df
                          family = Beta,
           
                          # MCMC settings:
           
                          seed = 42,
                          cores = 4,
                          iter = 6000,
                          warmup = 3000,
                          control = list(adapt_delta = 0.95))

# Save model:

save(condition_dist_mdl, file = '../models_E1/condition_dist_mdl.RData')

Load model:

load('../models_E1/condition_dist_mdl.RData')

Show priors:

prior_summary(condition_dist_mdl)
##                 prior     class                 coef       group resp dpar
##                (flat)         b                                           
##                (flat)         b conditioninteractive                      
##                (flat)         b                                        phi
##                (flat)         b conditioninteractive                   phi
##  student_t(3, 0, 2.5) Intercept                                           
##  student_t(3, 0, 2.5) Intercept                                        phi
##  lkj_corr_cholesky(1)         L                                           
##  lkj_corr_cholesky(1)         L                      participant          
##  lkj_corr_cholesky(1)         L                             word          
##  student_t(3, 0, 2.5)        sd                                           
##  student_t(3, 0, 2.5)        sd                      participant          
##  student_t(3, 0, 2.5)        sd conditioninteractive participant          
##  student_t(3, 0, 2.5)        sd            Intercept participant          
##  student_t(3, 0, 2.5)        sd                             word          
##  student_t(3, 0, 2.5)        sd conditioninteractive        word          
##  student_t(3, 0, 2.5)        sd            Intercept        word          
##  nlpar lb ub       source
##                   default
##              (vectorized)
##                   default
##              (vectorized)
##                   default
##                   default
##                   default
##              (vectorized)
##              (vectorized)
##         0         default
##         0    (vectorized)
##         0    (vectorized)
##         0    (vectorized)
##         0    (vectorized)
##         0    (vectorized)
##         0    (vectorized)

Perform posterior predictive checks of the mixed beta regression:

pp_check(condition_dist_mdl, ndraws = 100)

Check this model:

condition_dist_mdl
##  Family: beta 
##   Links: mu = logit; phi = log 
## Formula: closeness_01 ~ 1 + condition + (1 + condition | participant) + (1 + condition | word) 
##          phi ~ 1 + condition
##    Data: dist_df (Number of observations: 504) 
##   Draws: 4 chains, each with iter = 6000; warmup = 3000; thin = 1;
##          total post-warmup draws = 12000
## 
## Group-Level Effects: 
## ~participant (Number of levels: 66) 
##                                     Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept)                           1.00      0.10     0.83     1.21 1.00
## sd(conditioninteractive)                0.88      0.10     0.70     1.08 1.00
## cor(Intercept,conditioninteractive)    -0.88      0.04    -0.94    -0.79 1.00
##                                     Bulk_ESS Tail_ESS
## sd(Intercept)                           1780     3449
## sd(conditioninteractive)                2326     4642
## cor(Intercept,conditioninteractive)     3083     5739
## 
## ~word (Number of levels: 16) 
##                                     Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept)                           0.03      0.03     0.00     0.10 1.00
## sd(conditioninteractive)                0.10      0.06     0.01     0.23 1.00
## cor(Intercept,conditioninteractive)    -0.10      0.58    -0.97     0.93 1.00
##                                     Bulk_ESS Tail_ESS
## sd(Intercept)                           6845     5311
## sd(conditioninteractive)                3082     4013
## cor(Intercept,conditioninteractive)     2758     5224
## 
## Population-Level Effects: 
##                          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
## Intercept                    0.69      0.13     0.44     0.94 1.00      867
## phi_Intercept                3.25      0.10     3.04     3.44 1.00    10395
## conditioninteractive         0.72      0.12     0.48     0.97 1.00     1182
## phi_conditioninteractive    -0.05      0.15    -0.33     0.24 1.00    10017
##                          Tail_ESS
## Intercept                    2005
## phi_Intercept                8690
## conditioninteractive         3087
## phi_conditioninteractive     9695
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
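On the response scale, the logit-link estimates above translate into predicted mean closeness via `plogis()`. A rough back-of-the-envelope calculation with the posterior means (ignoring random effects and posterior uncertainty):

```r
# Posterior means from the summary above, on the logit scale:
b_intercept <- 0.69  # non-interactive condition (reference level)
b_condition <- 0.72  # shift for the interactive condition

# Predicted mean closeness on the 0-1 scale:
plogis(b_intercept)               # non-interactive
plogis(b_intercept + b_condition) # interactive
```

That is, interactive conversations are predicted to yield a substantially higher mean closeness rating than non-interactive ones.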

Perform a one-sided hypothesis test on the condition fixed-effect coefficient:

hypothesis(condition_dist_mdl, 'conditioninteractive > 0')
## Hypothesis Tests for class b:
##                 Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (conditioninterac... > 0     0.72      0.12     0.52     0.93        Inf
##   Post.Prob Star
## 1         1    *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.

Make a model of all covariates on IOS. For this model, we do not include the random slope–intercept correlations: with six covariates, estimating all pairwise correlations would make the random effects structure very heavy, and we have no substantive interest in (or theoretical motivation for) these correlation terms. We include by-participant random slopes for all fixed effects, but not by-word slopes, as that would again make the model excessively complex.
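As an aside, the long list of `(0 + x|participant)` terms in the formula below is equivalent to the double-bar shorthand (supported by brms, as in lme4), which likewise suppresses all correlations among the listed terms. A sketch of the same specification (formula only, not fitted here):

```r
# Equivalent, more compact random-effects specification: the || syntax
# drops all correlations among the listed terms, matching the
# spelled-out (0 + x | participant) version used in this script:
covariates_formula <- IOS ~ 1 + pleasantness + commitment + intimacy +
  difficulty + self_contribution + other_contribution +
  (1 + pleasantness + commitment + intimacy + difficulty +
     self_contribution + other_contribution || participant) +
  (1 | word)
```

We spell the terms out individually in the model call below purely for explicitness.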

covariates_IOS_mdl <- brm(IOS ~ 
                            
                            # Fixed effects:
                            
                            1 +
                            pleasantness +
                            commitment +
                            intimacy +
                            difficulty + 
                            self_contribution +
                            other_contribution +
                            
                            # Random effects:
                            
                            (1|participant) +
                            (0 + pleasantness|participant) +
                            (0 + commitment|participant) +
                            (0 + intimacy|participant) +
                            (0 + difficulty|participant) +
                            (0 + self_contribution|participant) +
                            (0 + other_contribution|participant) +
                            (1|word),
                          
                          # General stuff:
           
                          data = inter_df,
                          family = cumulative,
           
                          # MCMC settings:
           
                          seed = 42,
                          cores = 4,
                          iter = 6000,
                          warmup = 3000,
                          control = list(adapt_delta = 0.9))

# Save model:

save(covariates_IOS_mdl,
     file = '../models_E1/covariates_IOS_mdl.RData')

Load model:

load('../models_E1/covariates_IOS_mdl.RData')

Show priors:

prior_summary(covariates_IOS_mdl)
##                 prior     class               coef       group resp dpar nlpar
##                (flat)         b                                               
##                (flat)         b         commitment                            
##                (flat)         b         difficulty                            
##                (flat)         b           intimacy                            
##                (flat)         b other_contribution                            
##                (flat)         b       pleasantness                            
##                (flat)         b  self_contribution                            
##  student_t(3, 0, 2.5) Intercept                                               
##  student_t(3, 0, 2.5) Intercept                  1                            
##  student_t(3, 0, 2.5) Intercept                  2                            
##  student_t(3, 0, 2.5) Intercept                  3                            
##  student_t(3, 0, 2.5) Intercept                  4                            
##  student_t(3, 0, 2.5) Intercept                  5                            
##  student_t(3, 0, 2.5)        sd                                               
##  student_t(3, 0, 2.5)        sd                    participant                
##  student_t(3, 0, 2.5)        sd         commitment participant                
##  student_t(3, 0, 2.5)        sd         difficulty participant                
##  student_t(3, 0, 2.5)        sd          Intercept participant                
##  student_t(3, 0, 2.5)        sd           intimacy participant                
##  student_t(3, 0, 2.5)        sd other_contribution participant                
##  student_t(3, 0, 2.5)        sd       pleasantness participant                
##  student_t(3, 0, 2.5)        sd  self_contribution participant                
##  student_t(3, 0, 2.5)        sd                           word                
##  student_t(3, 0, 2.5)        sd          Intercept        word                
##  lb ub       source
##             default
##        (vectorized)
##        (vectorized)
##        (vectorized)
##        (vectorized)
##        (vectorized)
##        (vectorized)
##             default
##        (vectorized)
##        (vectorized)
##        (vectorized)
##        (vectorized)
##        (vectorized)
##   0         default
##   0    (vectorized)
##   0    (vectorized)
##   0    (vectorized)
##   0    (vectorized)
##   0    (vectorized)
##   0    (vectorized)
##   0    (vectorized)
##   0    (vectorized)
##   0    (vectorized)
##   0    (vectorized)

Perform posterior predictive checks of the mixed ordinal regression:

pp_check(covariates_IOS_mdl, ndraws = 100, type = 'ecdf_overlay')

Check this model:

covariates_IOS_mdl
##  Family: cumulative 
##   Links: mu = logit; disc = identity 
## Formula: IOS ~ 1 + pleasantness + commitment + intimacy + difficulty + self_contribution + other_contribution + (1 | participant) + (0 + pleasantness | participant) + (0 + commitment | participant) + (0 + intimacy | participant) + (0 + difficulty | participant) + (0 + self_contribution | participant) + (0 + other_contribution | participant) + (1 | word) 
##    Data: inter_df (Number of observations: 248) 
##   Draws: 4 chains, each with iter = 6000; warmup = 3000; thin = 1;
##          total post-warmup draws = 12000
## 
## Group-Level Effects: 
## ~participant (Number of levels: 62) 
##                        Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
## sd(Intercept)              1.54      0.88     0.08     3.27 1.00     2235
## sd(pleasantness)           0.02      0.01     0.00     0.04 1.00     2726
## sd(commitment)             0.01      0.01     0.00     0.04 1.00     2920
## sd(intimacy)               0.01      0.01     0.00     0.03 1.00     4313
## sd(difficulty)             0.03      0.01     0.00     0.06 1.00     2479
## sd(self_contribution)      0.02      0.01     0.00     0.04 1.00     2503
## sd(other_contribution)     0.02      0.01     0.00     0.04 1.00     2675
##                        Tail_ESS
## sd(Intercept)              4989
## sd(pleasantness)           5413
## sd(commitment)             5716
## sd(intimacy)               7143
## sd(difficulty)             3076
## sd(self_contribution)      5914
## sd(other_contribution)     6286
## 
## ~word (Number of levels: 16) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     0.31      0.22     0.01     0.84 1.00     4985     7070
## 
## Population-Level Effects: 
##                    Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept[1]          -2.12      1.62    -5.36     1.03 1.00    12416    10308
## Intercept[2]           3.87      1.57     0.92     7.12 1.00     9839     9582
## Intercept[3]           6.90      1.69     3.77    10.47 1.00     8199     8175
## Intercept[4]          10.05      1.84     6.68    13.94 1.00     7201     7979
## Intercept[5]          14.09      2.06    10.35    18.39 1.00     6953     7708
## pleasantness           0.10      0.02     0.07     0.14 1.00     8315     7934
## commitment            -0.02      0.02    -0.05     0.01 1.00    16251    10172
## intimacy               0.02      0.01    -0.00     0.04 1.00    16048     8614
## difficulty            -0.02      0.01    -0.04     0.00 1.00    12047     8208
## self_contribution     -0.01      0.02    -0.05     0.02 1.00    17542    10343
## other_contribution     0.00      0.02    -0.03     0.03 1.00    13399     9663
## 
## Family Specific Parameters: 
##      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## disc     1.00      0.00     1.00     1.00   NA       NA       NA
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

Test all hypotheses from the covariate model:

hypothesis(covariates_IOS_mdl, 'pleasantness > 0')
## Hypothesis Tests for class b:
##           Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
## 1 (pleasantness) > 0      0.1      0.02     0.08     0.13        Inf         1
##   Star
## 1    *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(covariates_IOS_mdl, 'commitment > 0')
## Hypothesis Tests for class b:
##         Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
## 1 (commitment) > 0    -0.02      0.02    -0.04     0.01       0.16      0.14
##   Star
## 1     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(covariates_IOS_mdl, 'intimacy > 0')
## Hypothesis Tests for class b:
##       Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob Star
## 1 (intimacy) > 0     0.02      0.01        0     0.03      28.41      0.97    *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(covariates_IOS_mdl, 'difficulty < 0')
## Hypothesis Tests for class b:
##         Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
## 1 (difficulty) < 0    -0.02      0.01    -0.04        0      25.97      0.96
##   Star
## 1    *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(covariates_IOS_mdl, 'self_contribution > 0')
## Hypothesis Tests for class b:
##                Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (self_contribution) > 0    -0.01      0.02    -0.04     0.01       0.27
##   Post.Prob Star
## 1      0.21     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(covariates_IOS_mdl, 'other_contribution > 0')
## Hypothesis Tests for class b:
##                 Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (other_contribution) > 0        0      0.02    -0.02     0.03       1.29
##   Post.Prob Star
## 1      0.56     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
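The evidence ratios reported by `hypothesis()` for these one-sided tests are simply posterior odds: the proportion of posterior draws consistent with the hypothesis divided by the proportion against it. A small simulated illustration with hypothetical draws (not the actual posterior):

```r
set.seed(42)

# Hypothetical posterior draws for a coefficient about two posterior
# SDs above zero (roughly the situation for intimacy above):
draws <- rnorm(12000, mean = 0.02, sd = 0.01)

# Posterior probability that b > 0, and the implied evidence ratio:
post_prob  <- mean(draws > 0)
evid_ratio <- post_prob / (1 - post_prob)

post_prob
evid_ratio
```

When every draw satisfies the hypothesis (as for pleasantness above), the denominator is zero and the evidence ratio is reported as `Inf`.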

Now model closeness as a function of all covariates:

covariates_dist_mdl <- brm(bf(closeness_01 ~ 
                             
                                # Fixed effects:
                             
                                1 +
                                pleasantness +
                                commitment +
                                intimacy +
                                difficulty +
                                self_contribution +
                                other_contribution +
                             
                                # Random effects:
                               
                                (1|participant) +
                                (0 + pleasantness|participant) +
                                (0 + commitment|participant) +
                                (0 + intimacy|participant) +
                                (0 + difficulty|participant) +
                                (0 + self_contribution|participant) +
                                (0 + other_contribution|participant) +
                                (1|word),
                              
                              # Family-specific parameter (shape):
                              
                              phi ~ 1),
                           
                           # General stuff:

                           data = inter_df,
                           family = Beta,

                           # MCMC settings:

                           init = 0, # doesn't converge otherwise
                           seed = 42,
                           cores = 4,
                           iter = 6000,
                           warmup = 3000,
                           control = list(adapt_delta = 0.90))

# Save model:

save(covariates_dist_mdl, file = '../models_E1/covariates_dist_mdl.RData')

Load model:

load('../models_E1/covariates_dist_mdl.RData')

Show priors:

prior_summary(covariates_dist_mdl)
##                 prior     class               coef       group resp dpar nlpar
##                (flat)         b                                               
##                (flat)         b         commitment                            
##                (flat)         b         difficulty                            
##                (flat)         b           intimacy                            
##                (flat)         b other_contribution                            
##                (flat)         b       pleasantness                            
##                (flat)         b  self_contribution                            
##  student_t(3, 0, 2.5) Intercept                                               
##  student_t(3, 0, 2.5) Intercept                                      phi      
##  student_t(3, 0, 2.5)        sd                                               
##  student_t(3, 0, 2.5)        sd                    participant                
##  student_t(3, 0, 2.5)        sd         commitment participant                
##  student_t(3, 0, 2.5)        sd         difficulty participant                
##  student_t(3, 0, 2.5)        sd          Intercept participant                
##  student_t(3, 0, 2.5)        sd           intimacy participant                
##  student_t(3, 0, 2.5)        sd other_contribution participant                
##  student_t(3, 0, 2.5)        sd       pleasantness participant                
##  student_t(3, 0, 2.5)        sd  self_contribution participant                
##  student_t(3, 0, 2.5)        sd                           word                
##  student_t(3, 0, 2.5)        sd          Intercept        word                
##  lb ub       source
##             default
##        (vectorized)
##        (vectorized)
##        (vectorized)
##        (vectorized)
##        (vectorized)
##        (vectorized)
##             default
##             default
##   0         default
##   0    (vectorized)
##   0    (vectorized)
##   0    (vectorized)
##   0    (vectorized)
##   0    (vectorized)
##   0    (vectorized)
##   0    (vectorized)
##   0    (vectorized)
##   0    (vectorized)
##   0    (vectorized)

Perform posterior predictive checks of the mixed beta regression:

pp_check(covariates_dist_mdl,
         ndraws = 100, type = 'ecdf_overlay')

Check this model:

covariates_dist_mdl
##  Family: beta 
##   Links: mu = logit; phi = log 
## Formula: closeness_01 ~ 1 + pleasantness + commitment + intimacy + difficulty + self_contribution + other_contribution + (1 | participant) + (0 + pleasantness | participant) + (0 + commitment | participant) + (0 + intimacy | participant) + (0 + difficulty | participant) + (0 + self_contribution | participant) + (0 + other_contribution | participant) + (1 | word) 
##          phi ~ 1
##    Data: inter_df (Number of observations: 248) 
##   Draws: 4 chains, each with iter = 6000; warmup = 3000; thin = 1;
##          total post-warmup draws = 12000
## 
## Group-Level Effects: 
## ~participant (Number of levels: 62) 
##                        Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
## sd(Intercept)              0.34      0.09     0.12     0.51 1.00     2048
## sd(pleasantness)           0.00      0.00     0.00     0.00 1.00     4285
## sd(commitment)             0.00      0.00     0.00     0.00 1.00     2865
## sd(intimacy)               0.00      0.00     0.00     0.00 1.00     2537
## sd(difficulty)             0.01      0.00     0.01     0.02 1.00     3223
## sd(self_contribution)      0.00      0.00     0.00     0.00 1.00     1781
## sd(other_contribution)     0.00      0.00     0.00     0.00 1.00     3372
##                        Tail_ESS
## sd(Intercept)              1598
## sd(pleasantness)           6094
## sd(commitment)             4613
## sd(intimacy)               4654
## sd(difficulty)             6073
## sd(self_contribution)      2754
## sd(other_contribution)     5335
## 
## ~word (Number of levels: 16) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     0.05      0.04     0.00     0.14 1.00     4141     5114
## 
## Population-Level Effects: 
##                    Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept              0.62      0.27     0.10     1.14 1.00     9013     9599
## phi_Intercept          3.92      0.13     3.66     4.17 1.00     3915     7248
## pleasantness           0.01      0.00     0.01     0.02 1.00    10323     9420
## commitment            -0.00      0.00    -0.01     0.00 1.00    11520     9378
## intimacy               0.00      0.00    -0.00     0.00 1.00    14807     9996
## difficulty            -0.01      0.00    -0.01    -0.00 1.00     3117     5523
## self_contribution      0.00      0.00    -0.00     0.01 1.00    12088     8914
## other_contribution    -0.00      0.00    -0.01     0.00 1.00    12484     9980
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

Test all hypotheses from the covariate model:

hypothesis(covariates_dist_mdl, 'pleasantness > 0')
## Hypothesis Tests for class b:
##           Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
## 1 (pleasantness) > 0     0.01         0     0.01     0.02        Inf         1
##   Star
## 1    *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(covariates_dist_mdl, 'commitment > 0')
## Hypothesis Tests for class b:
##         Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
## 1 (commitment) > 0        0         0    -0.01        0       0.36      0.27
##   Star
## 1     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(covariates_dist_mdl, 'intimacy > 0')
## Hypothesis Tests for class b:
##       Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob Star
## 1 (intimacy) > 0        0         0        0        0       1.11      0.53     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(covariates_dist_mdl, 'difficulty < 0')
## Hypothesis Tests for class b:
##         Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
## 1 (difficulty) < 0    -0.01         0    -0.01        0      76.92      0.99
##   Star
## 1    *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(covariates_dist_mdl, 'self_contribution > 0')
## Hypothesis Tests for class b:
##                Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (self_contribution) > 0        0         0        0     0.01       1.43
##   Post.Prob Star
## 1      0.59     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(covariates_dist_mdl, 'other_contribution > 0')
## Hypothesis Tests for class b:
##                 Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (other_contribution) > 0        0         0    -0.01        0       0.48
##   Post.Prob Star
## 1      0.32     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.

11 Main model: interaction

Fit the model of category, other-contribution, and their interaction on IOS:

main_IOS_mdl <- brm(IOS ~ 
                      
                      # Fixed effects:
                      
                      1 +
                      category +
                      other_contribution_c +
                      category:other_contribution_c +
                      
                      # Random effects:
                      
                      (1 +
                         category +
                         other_contribution_c +
                         category:other_contribution_c|participant) +
                      (1 + other_contribution_c|word),
           
                    data = inter_df,
                    family = cumulative,
           
                    # MCMC settings:
           
                    seed = 42,
                    cores = 4,
                    iter = 6000,
                    warmup = 3000,
                    control = list(adapt_delta = 0.9))

# Save model:

save(main_IOS_mdl, file = '../models_E1/main_IOS_mdl.RData')

Load model:

load('../models_E1/main_IOS_mdl.RData')

Show priors:

prior_summary(main_IOS_mdl)
##                 prior     class                                  coef
##                (flat)         b                                      
##                (flat)         b                      categoryabstract
##                (flat)         b categoryabstract:other_contribution_c
##                (flat)         b                  other_contribution_c
##  student_t(3, 0, 2.5) Intercept                                      
##  student_t(3, 0, 2.5) Intercept                                     1
##  student_t(3, 0, 2.5) Intercept                                     2
##  student_t(3, 0, 2.5) Intercept                                     3
##  student_t(3, 0, 2.5) Intercept                                     4
##  student_t(3, 0, 2.5) Intercept                                     5
##  lkj_corr_cholesky(1)         L                                      
##  lkj_corr_cholesky(1)         L                                      
##  lkj_corr_cholesky(1)         L                                      
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                      categoryabstract
##  student_t(3, 0, 2.5)        sd categoryabstract:other_contribution_c
##  student_t(3, 0, 2.5)        sd                             Intercept
##  student_t(3, 0, 2.5)        sd                  other_contribution_c
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                             Intercept
##  student_t(3, 0, 2.5)        sd                  other_contribution_c
##        group resp dpar nlpar lb ub       source
##                                         default
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                         default
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                         default
##  participant                       (vectorized)
##         word                       (vectorized)
##                               0         default
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##         word                  0    (vectorized)
##         word                  0    (vectorized)
##         word                  0    (vectorized)

Perform posterior predictive checks of the mixed ordinal (cumulative) regression:

pp_check(main_IOS_mdl, ndraws = 100)

Check this model:

main_IOS_mdl
##  Family: cumulative 
##   Links: mu = logit; disc = identity 
## Formula: IOS ~ 1 + category + other_contribution_c + category:other_contribution_c + (1 + category + other_contribution_c + category:other_contribution_c | participant) + (1 + other_contribution_c | word) 
##    Data: inter_df (Number of observations: 248) 
##   Draws: 4 chains, each with iter = 6000; warmup = 3000; thin = 1;
##          total post-warmup draws = 12000
## 
## Group-Level Effects: 
## ~participant (Number of levels: 62) 
##                                                                 Estimate
## sd(Intercept)                                                       3.10
## sd(categoryabstract)                                                2.47
## sd(other_contribution_c)                                            0.06
## sd(categoryabstract:other_contribution_c)                           0.03
## cor(Intercept,categoryabstract)                                    -0.09
## cor(Intercept,other_contribution_c)                                -0.13
## cor(categoryabstract,other_contribution_c)                          0.09
## cor(Intercept,categoryabstract:other_contribution_c)               -0.03
## cor(categoryabstract,categoryabstract:other_contribution_c)         0.01
## cor(other_contribution_c,categoryabstract:other_contribution_c)    -0.04
##                                                                 Est.Error
## sd(Intercept)                                                        0.54
## sd(categoryabstract)                                                 0.61
## sd(other_contribution_c)                                             0.03
## sd(categoryabstract:other_contribution_c)                            0.02
## cor(Intercept,categoryabstract)                                      0.21
## cor(Intercept,other_contribution_c)                                  0.29
## cor(categoryabstract,other_contribution_c)                           0.35
## cor(Intercept,categoryabstract:other_contribution_c)                 0.43
## cor(categoryabstract,categoryabstract:other_contribution_c)          0.44
## cor(other_contribution_c,categoryabstract:other_contribution_c)      0.45
##                                                                 l-95% CI
## sd(Intercept)                                                       2.14
## sd(categoryabstract)                                                1.33
## sd(other_contribution_c)                                            0.01
## sd(categoryabstract:other_contribution_c)                           0.00
## cor(Intercept,categoryabstract)                                    -0.46
## cor(Intercept,other_contribution_c)                                -0.70
## cor(categoryabstract,other_contribution_c)                         -0.55
## cor(Intercept,categoryabstract:other_contribution_c)               -0.81
## cor(categoryabstract,categoryabstract:other_contribution_c)        -0.80
## cor(other_contribution_c,categoryabstract:other_contribution_c)    -0.83
##                                                                 u-95% CI Rhat
## sd(Intercept)                                                       4.23 1.00
## sd(categoryabstract)                                                3.69 1.00
## sd(other_contribution_c)                                            0.13 1.00
## sd(categoryabstract:other_contribution_c)                           0.08 1.00
## cor(Intercept,categoryabstract)                                     0.36 1.00
## cor(Intercept,other_contribution_c)                                 0.45 1.00
## cor(categoryabstract,other_contribution_c)                          0.78 1.00
## cor(Intercept,categoryabstract:other_contribution_c)                0.79 1.00
## cor(categoryabstract,categoryabstract:other_contribution_c)         0.81 1.00
## cor(other_contribution_c,categoryabstract:other_contribution_c)     0.80 1.00
##                                                                 Bulk_ESS
## sd(Intercept)                                                       3020
## sd(categoryabstract)                                                2010
## sd(other_contribution_c)                                            1471
## sd(categoryabstract:other_contribution_c)                           5473
## cor(Intercept,categoryabstract)                                     5597
## cor(Intercept,other_contribution_c)                                 7590
## cor(categoryabstract,other_contribution_c)                          3643
## cor(Intercept,categoryabstract:other_contribution_c)               17724
## cor(categoryabstract,categoryabstract:other_contribution_c)        12980
## cor(other_contribution_c,categoryabstract:other_contribution_c)    11076
##                                                                 Tail_ESS
## sd(Intercept)                                                       5637
## sd(categoryabstract)                                                2197
## sd(other_contribution_c)                                            3020
## sd(categoryabstract:other_contribution_c)                           6792
## cor(Intercept,categoryabstract)                                     5981
## cor(Intercept,other_contribution_c)                                 6189
## cor(categoryabstract,other_contribution_c)                          5682
## cor(Intercept,categoryabstract:other_contribution_c)                9684
## cor(categoryabstract,categoryabstract:other_contribution_c)         9246
## cor(other_contribution_c,categoryabstract:other_contribution_c)    10487
## 
## ~word (Number of levels: 16) 
##                                     Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept)                           0.59      0.31     0.06     1.27 1.00
## sd(other_contribution_c)                0.02      0.01     0.00     0.04 1.00
## cor(Intercept,other_contribution_c)     0.26      0.54    -0.89     0.97 1.00
##                                     Bulk_ESS Tail_ESS
## sd(Intercept)                           2908     3716
## sd(other_contribution_c)                5288     6129
## cor(Intercept,other_contribution_c)    11387     9363
## 
## Population-Level Effects: 
##                                       Estimate Est.Error l-95% CI u-95% CI Rhat
## Intercept[1]                             -7.61      1.03    -9.80    -5.79 1.00
## Intercept[2]                             -2.37      0.58    -3.58    -1.31 1.00
## Intercept[3]                              0.36      0.53    -0.69     1.43 1.00
## Intercept[4]                              3.13      0.63     1.97     4.43 1.00
## Intercept[5]                              7.13      0.99     5.36     9.21 1.00
## categoryabstract                         -0.37      0.56    -1.48     0.73 1.00
## other_contribution_c                      0.03      0.02    -0.01     0.07 1.00
## categoryabstract:other_contribution_c     0.06      0.03     0.01     0.11 1.00
##                                       Bulk_ESS Tail_ESS
## Intercept[1]                              4549     6837
## Intercept[2]                              7039     8168
## Intercept[3]                              9474     9506
## Intercept[4]                              5398     7763
## Intercept[5]                              4113     7437
## categoryabstract                         10291     8578
## other_contribution_c                      8689     8330
## categoryabstract:other_contribution_c     6098     8275
## 
## Family Specific Parameters: 
##      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## disc     1.00      0.00     1.00     1.00   NA       NA       NA
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
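As a reading aid, the cumulative-logit estimates above can be back-transformed into predicted response probabilities by hand. A minimal base R sketch using the printed point estimates (the thresholds Intercept[1]–Intercept[5] and the categoryabstract coefficient); this is an illustration added for interpretation, not part of the original analysis:

```r
# Back-transform the population-level cumulative-logit estimates above into
# predicted IOS response probabilities (point estimates from the summary).
thresholds <- c(-7.61, -2.37, 0.36, 3.13, 7.13)  # Intercept[1]..Intercept[5]

# Linear predictor for an abstract-category trial at other_contribution_c = 0:
eta <- -0.37  # categoryabstract

cum_p <- plogis(thresholds - eta)  # P(IOS <= k) for k = 1..5
probs <- diff(c(0, cum_p, 1))      # P(IOS = k) for k = 1..6; sums to 1
round(probs, 3)
```

In the cumulative family with a logit link, P(IOS ≤ k) = logit⁻¹(τₖ − η), so differencing the cumulative probabilities recovers the per-category probabilities.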

Perform hypothesis tests on the fixed effects coefficients:

hypothesis(main_IOS_mdl, 'categoryabstract < 0')
## Hypothesis Tests for class b:
##               Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (categoryabstract) < 0    -0.37      0.56    -1.29     0.54        3.1
##   Post.Prob Star
## 1      0.76     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(main_IOS_mdl, 'other_contribution_c  > 0')
## Hypothesis Tests for class b:
##                 Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (other_contributi... > 0     0.03      0.02    -0.01     0.07      10.09
##   Post.Prob Star
## 1      0.91     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(main_IOS_mdl, 'categoryabstract:other_contribution_c  > 0')
## Hypothesis Tests for class b:
##                 Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (categoryabstract... > 0     0.06      0.03     0.02      0.1         99
##   Post.Prob Star
## 1      0.99    *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
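Note that for one-sided hypotheses the evidence ratio and the posterior probability reported by hypothesis() are two views of the same quantity: the evidence ratio is the posterior odds of the hypothesis. A quick sanity check against the interaction test above:

```r
# For a directional hypothesis, Evid.Ratio = Post.Prob / (1 - Post.Prob).
# The interaction test above reports Post.Prob = 0.99:
post_prob <- 0.99
evid_ratio <- post_prob / (1 - post_prob)
all.equal(evid_ratio, 99)  # matches the printed Evid.Ratio of 99
```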

Plot these posteriors. For this, we first extract the posterior samples and then reshape them into long format:

# Extract:

posts <- posterior_samples(main_IOS_mdl) %>% 
  select(b_categoryabstract:`b_categoryabstract:other_contribution_c`)
## Warning: Method 'posterior_samples' is deprecated. Please see ?as_draws for
## recommended alternatives.
# Rename:

colnames(posts) <- c('abstractness', 'other contribution', 'interaction')

# Pivot into long format and clean:

posts <- posts %>% 
  pivot_longer(cols = abstractness:interaction,
               values_to = 'posterior_sample',
               names_to = 'coefficient') %>% 
  mutate(coefficient = factor(coefficient,
                              levels = c('abstractness',
                                         'other contribution',
                                         'interaction')))

# Show:

posts
## # A tibble: 36,000 × 2
##    coefficient        posterior_sample
##    <fct>                         <dbl>
##  1 abstractness                -0.558 
##  2 other contribution           0.0442
##  3 interaction                  0.0915
##  4 abstractness                 0.0186
##  5 other contribution           0.0202
##  6 interaction                  0.0709
##  7 abstractness                -0.661 
##  8 other contribution           0.0378
##  9 interaction                  0.0701
## 10 abstractness                -0.494 
## # ℹ 35,990 more rows

Now plot this:

main_posts_p <- posts %>% 
  ggplot(aes(x = posterior_sample, y = coefficient)) +
  geom_vline(xintercept = 0, linetype = 2) +
  stat_halfeye(fill = 'steelblue') +
  scale_x_continuous(limits = c(-1.5, 1.5)) +
  ylab(NULL) +
  theme_classic()

# Show and save:

main_posts_p
## Warning: Removed 292 rows containing missing values (`stat_slabinterval()`).

ggsave(main_posts_p, filename = '../figures_E1/main_posteriors.pdf',
       width = 5, height = 2)
## Warning: Removed 292 rows containing missing values (`stat_slabinterval()`).

Now a model of closeness; but first we need to convert closeness to a [0, 1] variable for beta regression:

dist_inter_df <- mutate(dist_inter_df,
                        closeness_01 = closeness / 100)
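One caveat not handled by the code above: the Beta likelihood is defined on the open interval (0, 1), so ratings of exactly 0 or 100 would yield boundary values. If that occurs, a common remedy is the compression proposed by Smithson and Verkuilen (2006); a sketch with a hypothetical helper (not used in the original analysis):

```r
# Compress y in [0, 1] into the open interval (0, 1); n = number of
# observations. (Hypothetical helper, shown only in case boundary values
# occur in the data.)
squeeze_01 <- function(y, n) (y * (n - 1) + 0.5) / n

# e.g.:
# dist_inter_df <- mutate(dist_inter_df,
#                         closeness_01 = squeeze_01(closeness / 100, n()))
```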

Fit the model of closeness as a function of category, interacting with other_contribution_c, with difficulty_c as an additional covariate:

category_dist_mdl <- brm(bf(closeness_01 ~ 
                              
                              # Fixed effects:
                              
                              1 +
                              category +
                              other_contribution_c +
                              difficulty_c +
                              category:other_contribution_c +
                              
                              # Random effects:
                              
                              (1 +
                                 category +
                                 other_contribution_c +
                                 category:other_contribution_c|participant) +
                              (1 + other_contribution_c|word),
                           
                           # Family-specific parameter (shape):
                           
                           phi ~ 1 + category),
           
                         data = dist_inter_df, # interactive only
                         family = Beta,
           
                         # MCMC settings:
           
                         seed = 42,
                         init = 0,
                         cores = 4,
                         iter = 6000,
                         warmup = 3000,
                         control = list(adapt_delta = 0.99))

# Save model:

save(category_dist_mdl, file = '../models_E1/category_dist_mdl.RData')

Load model:

load('../models_E1/category_dist_mdl.RData')

Show priors:

prior_summary(category_dist_mdl)
##                 prior     class                                  coef
##                (flat)         b                                      
##                (flat)         b                      categoryabstract
##                (flat)         b categoryabstract:other_contribution_c
##                (flat)         b                          difficulty_c
##                (flat)         b                  other_contribution_c
##                (flat)         b                                      
##                (flat)         b                      categoryabstract
##  student_t(3, 0, 2.5) Intercept                                      
##  student_t(3, 0, 2.5) Intercept                                      
##  lkj_corr_cholesky(1)         L                                      
##  lkj_corr_cholesky(1)         L                                      
##  lkj_corr_cholesky(1)         L                                      
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                      categoryabstract
##  student_t(3, 0, 2.5)        sd categoryabstract:other_contribution_c
##  student_t(3, 0, 2.5)        sd                             Intercept
##  student_t(3, 0, 2.5)        sd                  other_contribution_c
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                             Intercept
##  student_t(3, 0, 2.5)        sd                  other_contribution_c
##        group resp dpar nlpar lb ub       source
##                                         default
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                    phi                  default
##                    phi             (vectorized)
##                                         default
##                    phi                  default
##                                         default
##  participant                       (vectorized)
##         word                       (vectorized)
##                               0         default
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##         word                  0    (vectorized)
##         word                  0    (vectorized)
##         word                  0    (vectorized)

Perform posterior predictive checks of the mixed beta regression:

pp_check(category_dist_mdl, ndraws = 100)

Check this model:

category_dist_mdl
##  Family: beta 
##   Links: mu = logit; phi = log 
## Formula: closeness_01 ~ 1 + category + other_contribution_c + difficulty_c + category:other_contribution_c + (1 + category + other_contribution_c + category:other_contribution_c | participant) + (1 + other_contribution_c | word) 
##          phi ~ 1 + category
##    Data: dist_inter_df (Number of observations: 248) 
##   Draws: 4 chains, each with iter = 6000; warmup = 3000; thin = 1;
##          total post-warmup draws = 12000
## 
## Group-Level Effects: 
## ~participant (Number of levels: 62) 
##                                                                 Estimate
## sd(Intercept)                                                       0.38
## sd(categoryabstract)                                                0.53
## sd(other_contribution_c)                                            0.02
## sd(categoryabstract:other_contribution_c)                           0.01
## cor(Intercept,categoryabstract)                                     0.07
## cor(Intercept,other_contribution_c)                                -0.04
## cor(categoryabstract,other_contribution_c)                         -0.60
## cor(Intercept,categoryabstract:other_contribution_c)               -0.22
## cor(categoryabstract,categoryabstract:other_contribution_c)        -0.53
## cor(other_contribution_c,categoryabstract:other_contribution_c)     0.07
##                                                                 Est.Error
## sd(Intercept)                                                        0.07
## sd(categoryabstract)                                                 0.09
## sd(other_contribution_c)                                             0.00
## sd(categoryabstract:other_contribution_c)                            0.01
## cor(Intercept,categoryabstract)                                      0.26
## cor(Intercept,other_contribution_c)                                  0.22
## cor(categoryabstract,other_contribution_c)                           0.18
## cor(Intercept,categoryabstract:other_contribution_c)                 0.35
## cor(categoryabstract,categoryabstract:other_contribution_c)          0.31
## cor(other_contribution_c,categoryabstract:other_contribution_c)      0.37
##                                                                 l-95% CI
## sd(Intercept)                                                       0.25
## sd(categoryabstract)                                                0.37
## sd(other_contribution_c)                                            0.01
## sd(categoryabstract:other_contribution_c)                           0.00
## cor(Intercept,categoryabstract)                                    -0.38
## cor(Intercept,other_contribution_c)                                -0.45
## cor(categoryabstract,other_contribution_c)                         -0.89
## cor(Intercept,categoryabstract:other_contribution_c)               -0.82
## cor(categoryabstract,categoryabstract:other_contribution_c)        -0.92
## cor(other_contribution_c,categoryabstract:other_contribution_c)    -0.63
##                                                                 u-95% CI Rhat
## sd(Intercept)                                                       0.53 1.00
## sd(categoryabstract)                                                0.71 1.00
## sd(other_contribution_c)                                            0.03 1.00
## sd(categoryabstract:other_contribution_c)                           0.03 1.00
## cor(Intercept,categoryabstract)                                     0.62 1.00
## cor(Intercept,other_contribution_c)                                 0.39 1.00
## cor(categoryabstract,other_contribution_c)                         -0.17 1.00
## cor(Intercept,categoryabstract:other_contribution_c)                0.55 1.00
## cor(categoryabstract,categoryabstract:other_contribution_c)         0.30 1.00
## cor(other_contribution_c,categoryabstract:other_contribution_c)     0.78 1.00
##                                                                 Bulk_ESS
## sd(Intercept)                                                       2235
## sd(categoryabstract)                                                3507
## sd(other_contribution_c)                                            2517
## sd(categoryabstract:other_contribution_c)                           1913
## cor(Intercept,categoryabstract)                                     1694
## cor(Intercept,other_contribution_c)                                 3546
## cor(categoryabstract,other_contribution_c)                          3326
## cor(Intercept,categoryabstract:other_contribution_c)                3385
## cor(categoryabstract,categoryabstract:other_contribution_c)         5021
## cor(other_contribution_c,categoryabstract:other_contribution_c)     4476
##                                                                 Tail_ESS
## sd(Intercept)                                                       4495
## sd(categoryabstract)                                                6432
## sd(other_contribution_c)                                            4085
## sd(categoryabstract:other_contribution_c)                           3034
## cor(Intercept,categoryabstract)                                     2251
## cor(Intercept,other_contribution_c)                                 5875
## cor(categoryabstract,other_contribution_c)                          5692
## cor(Intercept,categoryabstract:other_contribution_c)                5680
## cor(categoryabstract,categoryabstract:other_contribution_c)         5356
## cor(other_contribution_c,categoryabstract:other_contribution_c)     5740
## 
## ~word (Number of levels: 16) 
##                                     Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept)                           0.07      0.05     0.00     0.18 1.00
## sd(other_contribution_c)                0.00      0.00     0.00     0.01 1.00
## cor(Intercept,other_contribution_c)     0.08      0.58    -0.94     0.96 1.00
##                                     Bulk_ESS Tail_ESS
## sd(Intercept)                           3643     5321
## sd(other_contribution_c)                6703     6416
## cor(Intercept,other_contribution_c)     9227     7388
## 
## Population-Level Effects: 
##                                       Estimate Est.Error l-95% CI u-95% CI Rhat
## Intercept                                 1.41      0.07     1.27     1.56 1.00
## phi_Intercept                             4.08      0.21     3.66     4.47 1.00
## categoryabstract                         -0.05      0.10    -0.25     0.15 1.00
## other_contribution_c                      0.01      0.00     0.00     0.02 1.00
## difficulty_c                             -0.01      0.00    -0.01    -0.00 1.00
## categoryabstract:other_contribution_c     0.01      0.00    -0.00     0.02 1.00
## phi_categoryabstract                     -0.40      0.28    -0.95     0.15 1.00
##                                       Bulk_ESS Tail_ESS
## Intercept                                 6815     8231
## phi_Intercept                             4253     5597
## categoryabstract                          6195     7890
## other_contribution_c                      6273     8187
## difficulty_c                             10055    10097
## categoryabstract:other_contribution_c     6793     8192
## phi_categoryabstract                      5502     7480
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
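Since the beta model uses a logit link for mu and a log link for phi, the population-level estimates above can be mapped back to interpretable scales by hand; a small sketch using the printed point estimates (added for interpretation, not part of the original analysis):

```r
# Expected closeness (0-1 scale) for a concrete trial with covariates at 0:
plogis(1.41)          # ~0.80

# Abstract trials shift the logit by -0.05:
plogis(1.41 - 0.05)   # ~0.80 as well; the category effect on the mean is tiny

# The precision parameter phi is modeled on the log scale:
exp(4.08)             # ~59 for concrete trials
exp(4.08 - 0.40)      # ~40 for abstract trials (lower phi = more dispersion)
```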

Perform hypothesis tests for fixed effects coefficients:

hypothesis(category_dist_mdl, 'categoryabstract > 0')
## Hypothesis Tests for class b:
##               Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (categoryabstract) > 0    -0.05       0.1    -0.22     0.12       0.44
##   Post.Prob Star
## 1      0.31     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(category_dist_mdl, 'other_contribution_c > 0')
## Hypothesis Tests for class b:
##                 Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (other_contributi... > 0     0.01         0        0     0.01      43.61
##   Post.Prob Star
## 1      0.98    *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(category_dist_mdl, 'categoryabstract:other_contribution_c > 0')
## Hypothesis Tests for class b:
##                 Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (categoryabstract... > 0     0.01         0        0     0.02      15.64
##   Post.Prob Star
## 1      0.94     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.

12 Main model with difficulty covariate

Calculate the descriptive average of difficulty as a function of category (abstract vs. concrete):

inter_df %>% 
  filter(type_D == 'active') %>% 
  group_by(category) %>% 
  summarize(M = mean(difficulty))
## # A tibble: 2 × 2
##   category     M
##   <fct>    <dbl>
## 1 concrete  32.6
## 2 abstract  36.6

Make a graph of this:

# Plot core:

category_difficulty_p <- inter_df %>% 
  filter(type_D == 'active') %>% 
  ggplot(aes(x = difficulty, fill = category)) +
  geom_density(alpha = 0.55) +
  annotate(geom = 'text',
           label = 'concrete concepts',
           color = 'goldenrod3',
           x = 39, y = 0.011,
           hjust = 0, size = 4) +
  annotate(geom = 'text',
           label = 'abstract concepts',
           color = 'steelblue',
           x = 68, y = 0.0075,
           hjust = 0, size = 4)

# Scales and axes:

category_difficulty_p <- category_difficulty_p +
  scale_x_continuous(expand = c(0, 0),
                     breaks = seq(0, 100, 25)) +
  scale_y_continuous(expand = c(0, 0),
                     limits = c(0, 0.02)) +
  scale_fill_manual(values = c('goldenrod3', 'steelblue')) +
  xlab('Difficulty') +
  ylab('Density')

# Cosmetics:

category_difficulty_p <- category_difficulty_p +
  theme_classic() +
  theme(legend.position = 'none',
        legend.title = element_blank(),
        axis.title = element_text(face = 'bold',
                                  size = 12.5),
        axis.title.x = element_text(margin = margin(t = 6.5)),
        axis.title.y = element_text(margin = margin(r = 10)))

# Show:

category_difficulty_p

ggsave('../figures_E1/category_difficulty.pdf',
       plot = category_difficulty_p,
       width = 5, height = 3.5)

Since difficulty is now the dependent measure, convert it to the [0, 1] interval for beta regression:

inter_df <- mutate(inter_df,
                   difficulty_01 = difficulty / 100)

Then model difficulty as a function of category with a beta regression:

difficulty_mdl <- brm(bf(difficulty_01 ~ 
                           
                           # Fixed effects:
                           
                           1 +
                           category +
                           
                           # Random effects:
                           
                           (1 + category|participant) +
                           (1|word),
                        
                         # Family-specific parameter (shape):
                        
                         phi ~ 1 + category),
           
                      data = inter_df, # interactive only
                      family = Beta,
           
                      # MCMC settings:
           
                      seed = 42,
                      init = 0,
                      cores = 4,
                      iter = 6000,
                      warmup = 3000,
                      control = list(adapt_delta = 0.99))

# Save model:

save(difficulty_mdl, file = '../models_E1/difficulty_mdl.RData')

Load model:

load('../models_E1/difficulty_mdl.RData')

Show priors:

prior_summary(difficulty_mdl)
##                 prior     class             coef       group resp dpar nlpar lb
##                (flat)         b                                                
##                (flat)         b categoryabstract                               
##                (flat)         b                                    phi         
##                (flat)         b categoryabstract                   phi         
##  student_t(3, 0, 2.5) Intercept                                                
##  student_t(3, 0, 2.5) Intercept                                    phi         
##  lkj_corr_cholesky(1)         L                                                
##  lkj_corr_cholesky(1)         L                  participant                   
##  student_t(3, 0, 2.5)        sd                                               0
##  student_t(3, 0, 2.5)        sd                  participant                  0
##  student_t(3, 0, 2.5)        sd categoryabstract participant                  0
##  student_t(3, 0, 2.5)        sd        Intercept participant                  0
##  student_t(3, 0, 2.5)        sd                         word                  0
##  student_t(3, 0, 2.5)        sd        Intercept        word                  0
##  ub       source
##          default
##     (vectorized)
##          default
##     (vectorized)
##          default
##          default
##          default
##     (vectorized)
##          default
##     (vectorized)
##     (vectorized)
##     (vectorized)
##     (vectorized)
##     (vectorized)

Perform posterior predictive checks of the mixed beta regression:

pp_check(difficulty_mdl, ndraws = 100)

Check this model:

difficulty_mdl
##  Family: beta 
##   Links: mu = logit; phi = log 
## Formula: difficulty_01 ~ 1 + category + (1 + category | participant) + (1 | word) 
##          phi ~ 1 + category
##    Data: inter_df (Number of observations: 248) 
##   Draws: 4 chains, each with iter = 6000; warmup = 3000; thin = 1;
##          total post-warmup draws = 12000
## 
## Group-Level Effects: 
## ~participant (Number of levels: 62) 
##                                 Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept)                       0.80      0.16     0.51     1.13 1.00
## sd(categoryabstract)                0.98      0.19     0.62     1.38 1.00
## cor(Intercept,categoryabstract)     0.45      0.29    -0.13     0.95 1.00
##                                 Bulk_ESS Tail_ESS
## sd(Intercept)                       3735     7454
## sd(categoryabstract)                2596     5155
## cor(Intercept,categoryabstract)     1680     2286
## 
## ~word (Number of levels: 16) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     0.17      0.11     0.01     0.42 1.00     3919     6910
## 
## Population-Level Effects: 
##                      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
## Intercept               -0.71      0.16    -1.03    -0.40 1.00     7846
## phi_Intercept            1.05      0.17     0.73     1.40 1.00     3506
## categoryabstract         0.07      0.20    -0.34     0.47 1.00    10320
## phi_categoryabstract     0.81      0.26     0.27     1.30 1.00     5676
##                      Tail_ESS
## Intercept                8604
## phi_Intercept            6322
## categoryabstract         9473
## phi_categoryabstract     7905
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
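Since phi is modelled on the log link, the precision estimates can be back-transformed by exponentiation. A quick sketch using the population-level estimates above:

```r
# Precision (phi) is modelled on the log scale, so exponentiate to get it
# on the natural scale of the Beta distribution:

exp(1.05)        # phi for concrete words, ~2.86
exp(1.05 + 0.81) # phi for abstract words, ~6.42
```

Because the variance of a Beta distribution shrinks as phi grows, the positive `phi_categoryabstract` coefficient indicates that difficulty ratings for abstract words are less dispersed around their mean.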

Perform a hypothesis test on the category fixed-effect coefficient:

hypothesis(difficulty_mdl, 'categoryabstract > 0')
## Hypothesis Tests for class b:
##               Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (categoryabstract) > 0     0.07       0.2    -0.27      0.4       1.72
##   Post.Prob Star
## 1      0.63     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
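For a one-sided hypothesis, the evidence ratio is simply the posterior odds of the directional claim, so it maps directly onto the posterior probability printed next to it. A quick consistency check on the output above:

```r
# For directional hypotheses, Evid.Ratio = p / (1 - p), where p is the
# posterior probability that the coefficient satisfies the inequality:

er <- 1.72
er / (1 + er) # posterior probability, ~0.63
```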

Re-run the main analysis of IOS, this time with difficulty as a covariate, to control for its influence:

main_IOS_difficulty_mdl <- brm(IOS ~ 
                                 
                                 # Fixed effects:
                                 
                                 1 +
                                 category +
                                 other_contribution_c +
                                 difficulty_c + # new
                                 category:other_contribution_c +
                                 
                                 # Random effects:
                                 
                                 (1 +
                                    category +
                                    other_contribution_c +
                                    category:other_contribution_c|participant) +
                                 (1 + other_contribution_c|word),
           
                               data = inter_df,
                               family = cumulative,
           
                               # MCMC settings:
                               
                               seed = 42,
                               cores = 4,
                               iter = 6000,
                               warmup = 3000,
                               save_pars = save_pars(all = TRUE), # for bayes factors
                               control = list(adapt_delta = 0.99))

# Save model:

save(main_IOS_difficulty_mdl,
     file = '../models_E1/main_IOS_difficulty_mdl.RData')
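The `_c` suffix indicates centered predictors. How `difficulty_c` was created is not shown in this chunk; a plausible sketch (the raw column name is an assumption) is simple mean-centering:

```r
# Mean-center the difficulty ratings so that the remaining coefficients
# are interpretable at the average difficulty level (raw column assumed):

inter_df <- inter_df %>%
  mutate(difficulty_c = difficulty - mean(difficulty, na.rm = TRUE))
```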

Corresponding null model:

main_IOS_difficulty_null <- brm(IOS ~ 
                                 
                                 # Fixed effects:
                                 
                                 1 +
                                 category +
                                 other_contribution_c +
                                 difficulty_c + # new
                                 
                                 # Random effects:
                                 
                                 (1 +
                                    category +
                                    other_contribution_c +
                                    category:other_contribution_c|participant) +
                                 (1 + other_contribution_c|word),
           
                               data = inter_df,
                               family = cumulative,
           
                               # MCMC settings:
                               
                               seed = 42,
                               cores = 4,
                               iter = 6000,
                               warmup = 3000,
                               save_pars = save_pars(all = TRUE), # for bayes factors
                               control = list(adapt_delta = 0.99))

# Save model:

save(main_IOS_difficulty_null,
     file = '../models_E1/main_IOS_difficulty_null.RData')

Load models:

load('../models_E1/main_IOS_difficulty_mdl.RData')
load('../models_E1/main_IOS_difficulty_null.RData')

Show priors:

prior_summary(main_IOS_difficulty_mdl)
##                 prior     class                                  coef
##                (flat)         b                                      
##                (flat)         b                      categoryabstract
##                (flat)         b categoryabstract:other_contribution_c
##                (flat)         b                          difficulty_c
##                (flat)         b                  other_contribution_c
##  student_t(3, 0, 2.5) Intercept                                      
##  student_t(3, 0, 2.5) Intercept                                     1
##  student_t(3, 0, 2.5) Intercept                                     2
##  student_t(3, 0, 2.5) Intercept                                     3
##  student_t(3, 0, 2.5) Intercept                                     4
##  student_t(3, 0, 2.5) Intercept                                     5
##  lkj_corr_cholesky(1)         L                                      
##  lkj_corr_cholesky(1)         L                                      
##  lkj_corr_cholesky(1)         L                                      
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                      categoryabstract
##  student_t(3, 0, 2.5)        sd categoryabstract:other_contribution_c
##  student_t(3, 0, 2.5)        sd                             Intercept
##  student_t(3, 0, 2.5)        sd                  other_contribution_c
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                             Intercept
##  student_t(3, 0, 2.5)        sd                  other_contribution_c
##        group resp dpar nlpar lb ub       source
##                                         default
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                         default
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                         default
##  participant                       (vectorized)
##         word                       (vectorized)
##                               0         default
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##         word                  0    (vectorized)
##         word                  0    (vectorized)
##         word                  0    (vectorized)
prior_summary(main_IOS_difficulty_null)
##                 prior     class                                  coef
##                (flat)         b                                      
##                (flat)         b                      categoryabstract
##                (flat)         b                          difficulty_c
##                (flat)         b                  other_contribution_c
##  student_t(3, 0, 2.5) Intercept                                      
##  student_t(3, 0, 2.5) Intercept                                     1
##  student_t(3, 0, 2.5) Intercept                                     2
##  student_t(3, 0, 2.5) Intercept                                     3
##  student_t(3, 0, 2.5) Intercept                                     4
##  student_t(3, 0, 2.5) Intercept                                     5
##  lkj_corr_cholesky(1)         L                                      
##  lkj_corr_cholesky(1)         L                                      
##  lkj_corr_cholesky(1)         L                                      
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                      categoryabstract
##  student_t(3, 0, 2.5)        sd categoryabstract:other_contribution_c
##  student_t(3, 0, 2.5)        sd                             Intercept
##  student_t(3, 0, 2.5)        sd                  other_contribution_c
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                             Intercept
##  student_t(3, 0, 2.5)        sd                  other_contribution_c
##        group resp dpar nlpar lb ub       source
##                                         default
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                         default
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                         default
##  participant                       (vectorized)
##         word                       (vectorized)
##                               0         default
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##         word                  0    (vectorized)
##         word                  0    (vectorized)
##         word                  0    (vectorized)

Bayes factors for both:

# Compute:

IOS_bf <- bayes_factor(main_IOS_difficulty_mdl, main_IOS_difficulty_null)

# Save:

save(IOS_bf,
     file = '../models_E1/IOS_bf.RData')

Show Bayes factor:

# Load:

load('../models_E1/IOS_bf.RData')

# Show:

IOS_bf
## Estimated Bayes factor in favor of main_IOS_difficulty_mdl over main_IOS_difficulty_null: 0.58811

By convention, a Bayes factor larger than 3 is taken as evidence for model A over model B (in this case, the full model over the null model); conversely, a Bayes factor smaller than 1/3 is taken as evidence for B over A. This one is ~0.59, leaning slightly towards the null but inconclusive. So, although the posterior for the interaction coefficient is reliably non-zero, comparing the full model against the null yields weak, inconclusive evidence against the interaction already in Experiment 1. In other words, the coefficient is non-zero, but it is not really needed: the null model already explains the E1 data slightly better without it.
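Under equal prior model probabilities, a Bayes factor translates directly into a posterior model probability, which gives a feel for how close to even odds this value is:

```r
# With equal prior probabilities on both models, the posterior probability
# of the full model is BF / (1 + BF):

bf <- 0.58811
bf / (1 + bf) # ~0.37 for the full model, i.e. ~0.63 for the null
```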

Compute R-squared:

bayes_R2(main_IOS_difficulty_mdl)
## Warning: Predictions are treated as continuous variables in 'bayes_R2' which is
## likely invalid for ordinal families.
##     Estimate  Est.Error      Q2.5    Q97.5
## R2 0.7650374 0.03003084 0.6965933 0.817708
bayes_R2(main_IOS_difficulty_null)
## Warning: Predictions are treated as continuous variables in 'bayes_R2' which is
## likely invalid for ordinal families.
##     Estimate  Est.Error      Q2.5     Q97.5
## R2 0.7527548 0.03219429 0.6821758 0.8084409
# Check:

0.7650374 - 0.7527548
## [1] 0.0122826

Perform posterior predictive checks of the mixed ordinal (cumulative) regression:

pp_check(main_IOS_difficulty_mdl, ndraws = 100)

Check this model:

main_IOS_difficulty_mdl
##  Family: cumulative 
##   Links: mu = logit; disc = identity 
## Formula: IOS ~ 1 + category + other_contribution_c + difficulty_c + category:other_contribution_c + (1 + category + other_contribution_c + category:other_contribution_c | participant) + (1 + other_contribution_c | word) 
##    Data: inter_df (Number of observations: 248) 
##   Draws: 4 chains, each with iter = 6000; warmup = 3000; thin = 1;
##          total post-warmup draws = 12000
## 
## Group-Level Effects: 
## ~participant (Number of levels: 62) 
##                                                                 Estimate
## sd(Intercept)                                                       3.41
## sd(categoryabstract)                                                2.36
## sd(other_contribution_c)                                            0.06
## sd(categoryabstract:other_contribution_c)                           0.03
## cor(Intercept,categoryabstract)                                    -0.07
## cor(Intercept,other_contribution_c)                                -0.06
## cor(categoryabstract,other_contribution_c)                          0.08
## cor(Intercept,categoryabstract:other_contribution_c)               -0.02
## cor(categoryabstract,categoryabstract:other_contribution_c)         0.08
## cor(other_contribution_c,categoryabstract:other_contribution_c)    -0.03
##                                                                 Est.Error
## sd(Intercept)                                                        0.56
## sd(categoryabstract)                                                 0.61
## sd(other_contribution_c)                                             0.04
## sd(categoryabstract:other_contribution_c)                            0.02
## cor(Intercept,categoryabstract)                                      0.21
## cor(Intercept,other_contribution_c)                                  0.30
## cor(categoryabstract,other_contribution_c)                           0.38
## cor(Intercept,categoryabstract:other_contribution_c)                 0.43
## cor(categoryabstract,categoryabstract:other_contribution_c)          0.43
## cor(other_contribution_c,categoryabstract:other_contribution_c)      0.45
##                                                                 l-95% CI
## sd(Intercept)                                                       2.42
## sd(categoryabstract)                                                1.19
## sd(other_contribution_c)                                            0.00
## sd(categoryabstract:other_contribution_c)                           0.00
## cor(Intercept,categoryabstract)                                    -0.45
## cor(Intercept,other_contribution_c)                                -0.67
## cor(categoryabstract,other_contribution_c)                         -0.62
## cor(Intercept,categoryabstract:other_contribution_c)               -0.80
## cor(categoryabstract,categoryabstract:other_contribution_c)        -0.76
## cor(other_contribution_c,categoryabstract:other_contribution_c)    -0.83
##                                                                 u-95% CI Rhat
## sd(Intercept)                                                       4.63 1.00
## sd(categoryabstract)                                                3.57 1.00
## sd(other_contribution_c)                                            0.14 1.00
## sd(categoryabstract:other_contribution_c)                           0.09 1.00
## cor(Intercept,categoryabstract)                                     0.38 1.00
## cor(Intercept,other_contribution_c)                                 0.52 1.00
## cor(categoryabstract,other_contribution_c)                          0.80 1.00
## cor(Intercept,categoryabstract:other_contribution_c)                0.79 1.00
## cor(categoryabstract,categoryabstract:other_contribution_c)         0.83 1.00
## cor(other_contribution_c,categoryabstract:other_contribution_c)     0.79 1.00
##                                                                 Bulk_ESS
## sd(Intercept)                                                       2448
## sd(categoryabstract)                                                1707
## sd(other_contribution_c)                                             848
## sd(categoryabstract:other_contribution_c)                           3436
## cor(Intercept,categoryabstract)                                     4544
## cor(Intercept,other_contribution_c)                                 5170
## cor(categoryabstract,other_contribution_c)                          2237
## cor(Intercept,categoryabstract:other_contribution_c)                9768
## cor(categoryabstract,categoryabstract:other_contribution_c)         8184
## cor(other_contribution_c,categoryabstract:other_contribution_c)     7876
##                                                                 Tail_ESS
## sd(Intercept)                                                       4352
## sd(categoryabstract)                                                2154
## sd(other_contribution_c)                                            2320
## sd(categoryabstract:other_contribution_c)                           4631
## cor(Intercept,categoryabstract)                                     5323
## cor(Intercept,other_contribution_c)                                 4609
## cor(categoryabstract,other_contribution_c)                          4573
## cor(Intercept,categoryabstract:other_contribution_c)                8300
## cor(categoryabstract,categoryabstract:other_contribution_c)         7717
## cor(other_contribution_c,categoryabstract:other_contribution_c)     9059
## 
## ~word (Number of levels: 16) 
##                                     Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept)                           0.47      0.29     0.03     1.11 1.00
## sd(other_contribution_c)                0.02      0.01     0.00     0.04 1.00
## cor(Intercept,other_contribution_c)     0.23      0.55    -0.89     0.97 1.00
##                                     Bulk_ESS Tail_ESS
## sd(Intercept)                           2460     3958
## sd(other_contribution_c)                4175     5285
## cor(Intercept,other_contribution_c)     6103     6462
## 
## Population-Level Effects: 
##                                       Estimate Est.Error l-95% CI u-95% CI Rhat
## Intercept[1]                             -8.13      1.11   -10.46    -6.17 1.00
## Intercept[2]                             -2.48      0.60    -3.73    -1.33 1.00
## Intercept[3]                              0.46      0.56    -0.60     1.60 1.00
## Intercept[4]                              3.43      0.67     2.21     4.86 1.00
## Intercept[5]                              7.61      1.05     5.74     9.89 1.00
## categoryabstract                         -0.21      0.52    -1.26     0.81 1.00
## other_contribution_c                      0.03      0.02    -0.01     0.07 1.00
## difficulty_c                             -0.04      0.01    -0.06    -0.02 1.00
## categoryabstract:other_contribution_c     0.06      0.03     0.01     0.11 1.00
##                                       Bulk_ESS Tail_ESS
## Intercept[1]                              2558     4893
## Intercept[2]                              3946     6336
## Intercept[3]                              4360     6264
## Intercept[4]                              3014     4483
## Intercept[5]                              2582     4831
## categoryabstract                          5877     6793
## other_contribution_c                      4818     6349
## difficulty_c                              5073     6511
## categoryabstract:other_contribution_c     4445     5202
## 
## Family Specific Parameters: 
##      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## disc     1.00      0.00     1.00     1.00   NA       NA       NA
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

Perform hypothesis tests on the fixed-effect coefficients:

hypothesis(main_IOS_difficulty_mdl,
           'categoryabstract < 0')
## Hypothesis Tests for class b:
##               Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (categoryabstract) < 0    -0.21      0.52    -1.07     0.64       2.02
##   Post.Prob Star
## 1      0.67     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(main_IOS_difficulty_mdl,
           'other_contribution_c > 0')
## Hypothesis Tests for class b:
##                 Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (other_contributi... > 0     0.03      0.02    -0.01     0.07      10.06
##   Post.Prob Star
## 1      0.91     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(main_IOS_difficulty_mdl,
           'categoryabstract:other_contribution_c  > 0')
## Hypothesis Tests for class b:
##                 Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (categoryabstract... > 0     0.06      0.03     0.02      0.1      92.02
##   Post.Prob Star
## 1      0.99    *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
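Because the cumulative model uses a logit link, the fixed-effect coefficients can be read as log odds ratios; exponentiating puts them on the odds scale. For the interaction estimate above:

```r
# The interaction estimate is on the log-odds scale; exponentiating gives
# the multiplicative change in the odds of a higher IOS rating per unit
# of other_contribution_c in the abstract (vs. concrete) category:

exp(0.06) # ~1.06, i.e. a ~6% increase in those odds per unit
```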

Re-fit the other main model, the one with closeness_01 as the dependent variable, this time also including difficulty_c as a control covariate:

category_dist_difficulty_mdl <- brm(bf(closeness_01 ~ 
                                      
                                      # Fixed effects:
                                      
                                      1 +
                                      category +
                                      other_contribution_c +
                                      difficulty_c + # new
                                      category:other_contribution_c +
                                      
                                      # Random effects:
                                      
                                      (1 +
                                         category +
                                         other_contribution_c +
                                         category:other_contribution_c|participant) +
                                      (1 + other_contribution_c|word),
                                      phi ~ 1 + category),
           
                                    data = dist_inter_df, # interactive only
                                    family = Beta,
           
                                    # MCMC settings:
           
                                    seed = 42,
                                    init = 0,
                                    cores = 4,
                                    iter = 6000,
                                    warmup = 3000,
                                    save_pars = save_pars(all = TRUE), # for bayes factors
                                    control = list(adapt_delta = 0.99))

# Save model:

save(category_dist_difficulty_mdl,
     file = '../models_E1/category_dist_difficulty_mdl.RData')

Corresponding null model:

category_dist_difficulty_null <- brm(bf(closeness_01 ~ 
                                      
                                      # Fixed effects:
                                      
                                      1 +
                                      category +
                                      other_contribution_c +
                                      difficulty_c + # new
                                      
                                      # Random effects:
                                      
                                      (1 +
                                         category +
                                         other_contribution_c +
                                         category:other_contribution_c|participant) +
                                      (1 + other_contribution_c|word),
                                      phi ~ 1 + category),
           
                                    data = dist_inter_df, # interactive only
                                    family = Beta,
           
                                    # MCMC settings:
           
                                    seed = 42,
                                    init = 0,
                                    cores = 4,
                                    iter = 6000,
                                    warmup = 3000,
                                    save_pars = save_pars(all = TRUE), # for bayes factors
                                    control = list(adapt_delta = 0.99))

# Save model:

save(category_dist_difficulty_null,
     file = '../models_E1/category_dist_difficulty_null.RData')

Load models:

load('../models_E1/category_dist_difficulty_mdl.RData')
load('../models_E1/category_dist_difficulty_null.RData')

Show priors:

prior_summary(category_dist_difficulty_mdl)
##                 prior     class                                  coef
##                (flat)         b                                      
##                (flat)         b                      categoryabstract
##                (flat)         b categoryabstract:other_contribution_c
##                (flat)         b                          difficulty_c
##                (flat)         b                  other_contribution_c
##                (flat)         b                                      
##                (flat)         b                      categoryabstract
##  student_t(3, 0, 2.5) Intercept                                      
##  student_t(3, 0, 2.5) Intercept                                      
##  lkj_corr_cholesky(1)         L                                      
##  lkj_corr_cholesky(1)         L                                      
##  lkj_corr_cholesky(1)         L                                      
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                      categoryabstract
##  student_t(3, 0, 2.5)        sd categoryabstract:other_contribution_c
##  student_t(3, 0, 2.5)        sd                             Intercept
##  student_t(3, 0, 2.5)        sd                  other_contribution_c
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                             Intercept
##  student_t(3, 0, 2.5)        sd                  other_contribution_c
##        group resp dpar nlpar lb ub       source
##                                         default
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                    phi                  default
##                    phi             (vectorized)
##                                         default
##                    phi                  default
##                                         default
##  participant                       (vectorized)
##         word                       (vectorized)
##                               0         default
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##         word                  0    (vectorized)
##         word                  0    (vectorized)
##         word                  0    (vectorized)
prior_summary(category_dist_difficulty_null)
##                 prior     class                                  coef
##                (flat)         b                                      
##                (flat)         b                      categoryabstract
##                (flat)         b                          difficulty_c
##                (flat)         b                  other_contribution_c
##                (flat)         b                                      
##                (flat)         b                      categoryabstract
##  student_t(3, 0, 2.5) Intercept                                      
##  student_t(3, 0, 2.5) Intercept                                      
##  lkj_corr_cholesky(1)         L                                      
##  lkj_corr_cholesky(1)         L                                      
##  lkj_corr_cholesky(1)         L                                      
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                      categoryabstract
##  student_t(3, 0, 2.5)        sd categoryabstract:other_contribution_c
##  student_t(3, 0, 2.5)        sd                             Intercept
##  student_t(3, 0, 2.5)        sd                  other_contribution_c
##  student_t(3, 0, 2.5)        sd                                      
##  student_t(3, 0, 2.5)        sd                             Intercept
##  student_t(3, 0, 2.5)        sd                  other_contribution_c
##        group resp dpar nlpar lb ub       source
##                                         default
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                    phi                  default
##                    phi             (vectorized)
##                                         default
##                    phi                  default
##                                         default
##  participant                       (vectorized)
##         word                       (vectorized)
##                               0         default
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##         word                  0    (vectorized)
##         word                  0    (vectorized)
##         word                  0    (vectorized)

Compute the Bayes factor for the full versus the null model:

# Compute:

dist_bf <- bayes_factor(category_dist_difficulty_mdl, category_dist_difficulty_null)

# Save:

save(dist_bf,
     file = '../models_E1/dist_bf.RData')

Show Bayes factor:

# Load:

load('../models_E1/dist_bf.RData')

# Show:

dist_bf
## Estimated Bayes factor in favor of category_dist_difficulty_mdl over category_dist_difficulty_null: 0.12264
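
A Bayes factor below 1 favors the null model; taking the reciprocal expresses the same evidence in the null's favor. A quick sketch using the value above:

```r
# Reciprocal of the Bayes factor: evidence for the null over the full model
bf_full_over_null <- 0.12264
bf_null_over_full <- 1 / bf_full_over_null
round(bf_null_over_full, 2) # roughly 8.15, i.e., moderate evidence for the null
```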

Perform posterior predictive checks of the mixed beta regression:

pp_check(category_dist_difficulty_mdl, ndraws = 100)

Check this model:

category_dist_difficulty_mdl
##  Family: beta 
##   Links: mu = logit; phi = log 
## Formula: closeness_01 ~ 1 + category + other_contribution_c + difficulty_c + category:other_contribution_c + (1 + category + other_contribution_c + category:other_contribution_c | participant) + (1 + other_contribution_c | word) 
##          phi ~ 1 + category
##    Data: dist_inter_df (Number of observations: 248) 
##   Draws: 4 chains, each with iter = 6000; warmup = 3000; thin = 1;
##          total post-warmup draws = 12000
## 
## Group-Level Effects: 
## ~participant (Number of levels: 62) 
##                                                                 Estimate
## sd(Intercept)                                                       0.38
## sd(categoryabstract)                                                0.53
## sd(other_contribution_c)                                            0.02
## sd(categoryabstract:other_contribution_c)                           0.01
## cor(Intercept,categoryabstract)                                     0.07
## cor(Intercept,other_contribution_c)                                -0.04
## cor(categoryabstract,other_contribution_c)                         -0.60
## cor(Intercept,categoryabstract:other_contribution_c)               -0.22
## cor(categoryabstract,categoryabstract:other_contribution_c)        -0.53
## cor(other_contribution_c,categoryabstract:other_contribution_c)     0.07
##                                                                 Est.Error
## sd(Intercept)                                                        0.07
## sd(categoryabstract)                                                 0.09
## sd(other_contribution_c)                                             0.00
## sd(categoryabstract:other_contribution_c)                            0.01
## cor(Intercept,categoryabstract)                                      0.26
## cor(Intercept,other_contribution_c)                                  0.22
## cor(categoryabstract,other_contribution_c)                           0.18
## cor(Intercept,categoryabstract:other_contribution_c)                 0.35
## cor(categoryabstract,categoryabstract:other_contribution_c)          0.31
## cor(other_contribution_c,categoryabstract:other_contribution_c)      0.37
##                                                                 l-95% CI
## sd(Intercept)                                                       0.25
## sd(categoryabstract)                                                0.37
## sd(other_contribution_c)                                            0.01
## sd(categoryabstract:other_contribution_c)                           0.00
## cor(Intercept,categoryabstract)                                    -0.38
## cor(Intercept,other_contribution_c)                                -0.45
## cor(categoryabstract,other_contribution_c)                         -0.89
## cor(Intercept,categoryabstract:other_contribution_c)               -0.82
## cor(categoryabstract,categoryabstract:other_contribution_c)        -0.92
## cor(other_contribution_c,categoryabstract:other_contribution_c)    -0.63
##                                                                 u-95% CI Rhat
## sd(Intercept)                                                       0.53 1.00
## sd(categoryabstract)                                                0.71 1.00
## sd(other_contribution_c)                                            0.03 1.00
## sd(categoryabstract:other_contribution_c)                           0.03 1.00
## cor(Intercept,categoryabstract)                                     0.62 1.00
## cor(Intercept,other_contribution_c)                                 0.39 1.00
## cor(categoryabstract,other_contribution_c)                         -0.17 1.00
## cor(Intercept,categoryabstract:other_contribution_c)                0.55 1.00
## cor(categoryabstract,categoryabstract:other_contribution_c)         0.30 1.00
## cor(other_contribution_c,categoryabstract:other_contribution_c)     0.78 1.00
##                                                                 Bulk_ESS
## sd(Intercept)                                                       2235
## sd(categoryabstract)                                                3507
## sd(other_contribution_c)                                            2517
## sd(categoryabstract:other_contribution_c)                           1913
## cor(Intercept,categoryabstract)                                     1694
## cor(Intercept,other_contribution_c)                                 3546
## cor(categoryabstract,other_contribution_c)                          3326
## cor(Intercept,categoryabstract:other_contribution_c)                3385
## cor(categoryabstract,categoryabstract:other_contribution_c)         5021
## cor(other_contribution_c,categoryabstract:other_contribution_c)     4476
##                                                                 Tail_ESS
## sd(Intercept)                                                       4495
## sd(categoryabstract)                                                6432
## sd(other_contribution_c)                                            4085
## sd(categoryabstract:other_contribution_c)                           3034
## cor(Intercept,categoryabstract)                                     2251
## cor(Intercept,other_contribution_c)                                 5875
## cor(categoryabstract,other_contribution_c)                          5692
## cor(Intercept,categoryabstract:other_contribution_c)                5680
## cor(categoryabstract,categoryabstract:other_contribution_c)         5356
## cor(other_contribution_c,categoryabstract:other_contribution_c)     5740
## 
## ~word (Number of levels: 16) 
##                                     Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept)                           0.07      0.05     0.00     0.18 1.00
## sd(other_contribution_c)                0.00      0.00     0.00     0.01 1.00
## cor(Intercept,other_contribution_c)     0.08      0.58    -0.94     0.96 1.00
##                                     Bulk_ESS Tail_ESS
## sd(Intercept)                           3643     5321
## sd(other_contribution_c)                6703     6416
## cor(Intercept,other_contribution_c)     9227     7388
## 
## Population-Level Effects: 
##                                       Estimate Est.Error l-95% CI u-95% CI Rhat
## Intercept                                 1.41      0.07     1.27     1.56 1.00
## phi_Intercept                             4.08      0.21     3.66     4.47 1.00
## categoryabstract                         -0.05      0.10    -0.25     0.15 1.00
## other_contribution_c                      0.01      0.00     0.00     0.02 1.00
## difficulty_c                             -0.01      0.00    -0.01    -0.00 1.00
## categoryabstract:other_contribution_c     0.01      0.00    -0.00     0.02 1.00
## phi_categoryabstract                     -0.40      0.28    -0.95     0.15 1.00
##                                       Bulk_ESS Tail_ESS
## Intercept                                 6815     8231
## phi_Intercept                             4253     5597
## categoryabstract                          6195     7890
## other_contribution_c                      6273     8187
## difficulty_c                             10055    10097
## categoryabstract:other_contribution_c     6793     8192
## phi_categoryabstract                      5502     7480
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
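
Both parts of the beta model are on link scales: mu uses a logit link and phi a log link. The intercepts can be back-transformed by hand (a sketch using the point estimates from the summary above):

```r
# Back-transform the mu intercept (logit link) to the 0-1 closeness scale:
plogis(1.41) # about 0.80, i.e., high average closeness

# Back-transform the phi intercept (log link) to the precision scale:
exp(4.08)    # about 59, i.e., responses tightly clustered around the mean
```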

Perform hypothesis tests on the fixed-effects coefficients:

hypothesis(category_dist_difficulty_mdl,
           'categoryabstract > 0')
## Hypothesis Tests for class b:
##               Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (categoryabstract) > 0    -0.05       0.1    -0.22     0.12       0.44
##   Post.Prob Star
## 1      0.31     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(category_dist_difficulty_mdl,
           'difficulty_c < 0')
## Hypothesis Tests for class b:
##           Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
## 1 (difficulty_c) < 0    -0.01         0    -0.01        0      11999         1
##   Star
## 1    *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(category_dist_difficulty_mdl,
           'other_contribution_c > 0')
## Hypothesis Tests for class b:
##                 Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (other_contributi... > 0     0.01         0        0     0.01      43.61
##   Post.Prob Star
## 1      0.98    *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(category_dist_difficulty_mdl,
           'categoryabstract:other_contribution_c > 0')
## Hypothesis Tests for class b:
##                 Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (categoryabstract... > 0     0.01         0        0     0.02      15.64
##   Post.Prob Star
## 1      0.94     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
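
For these one-sided tests, Evid.Ratio and Post.Prob carry the same information: the posterior probability equals ER / (1 + ER). Checking this against the values reported above:

```r
# Posterior probability implied by a one-sided evidence ratio:
er_to_pp <- function(er) er / (1 + er)

er_to_pp(43.61) # about 0.98 (other_contribution_c > 0)
er_to_pp(15.64) # about 0.94 (category by other_contribution_c interaction > 0)
```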

13 Main model with self-orientation

This analysis was not originally planned for E1, but since we decided to use it for E2, we run the self-orientation analysis here as well.

Run the main analysis of IOS with self_contribution_c as the predictor of interest, keeping difficulty_c as a control covariate:

main_IOS_self_mdl <- brm(IOS ~ 
                                 
                                 # Fixed effects:
                                 
                                 1 +
                                 category +
                                 self_contribution_c +
                                 difficulty_c + # new
                                 category:self_contribution_c +
                                 
                                 # Random effects:
                                 
                                 (1 +
                                    category +
                                    self_contribution_c +
                                    category:self_contribution_c|participant) +
                                 (1 + self_contribution_c|word),
           
                               data = inter_df,
                               family = cumulative,
           
                               # MCMC settings:
                               
                               seed = 42,
                               cores = 4,
                               iter = 6000,
                               warmup = 3000,
                               control = list(adapt_delta = 0.99))

# Save model:

save(main_IOS_self_mdl,
     file = '../models_E1/main_IOS_self_mdl.RData')

Load model:

load('../models_E1/main_IOS_self_mdl.RData')

Show priors:

prior_summary(main_IOS_self_mdl)
##                 prior     class                                 coef
##                (flat)         b                                     
##                (flat)         b                     categoryabstract
##                (flat)         b categoryabstract:self_contribution_c
##                (flat)         b                         difficulty_c
##                (flat)         b                  self_contribution_c
##  student_t(3, 0, 2.5) Intercept                                     
##  student_t(3, 0, 2.5) Intercept                                    1
##  student_t(3, 0, 2.5) Intercept                                    2
##  student_t(3, 0, 2.5) Intercept                                    3
##  student_t(3, 0, 2.5) Intercept                                    4
##  student_t(3, 0, 2.5) Intercept                                    5
##  lkj_corr_cholesky(1)         L                                     
##  lkj_corr_cholesky(1)         L                                     
##  lkj_corr_cholesky(1)         L                                     
##  student_t(3, 0, 2.5)        sd                                     
##  student_t(3, 0, 2.5)        sd                                     
##  student_t(3, 0, 2.5)        sd                     categoryabstract
##  student_t(3, 0, 2.5)        sd categoryabstract:self_contribution_c
##  student_t(3, 0, 2.5)        sd                            Intercept
##  student_t(3, 0, 2.5)        sd                  self_contribution_c
##  student_t(3, 0, 2.5)        sd                                     
##  student_t(3, 0, 2.5)        sd                            Intercept
##  student_t(3, 0, 2.5)        sd                  self_contribution_c
##        group resp dpar nlpar lb ub       source
##                                         default
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                         default
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                         default
##  participant                       (vectorized)
##         word                       (vectorized)
##                               0         default
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##         word                  0    (vectorized)
##         word                  0    (vectorized)
##         word                  0    (vectorized)

Perform posterior predictive checks of the mixed cumulative (ordinal) regression:

pp_check(main_IOS_self_mdl, ndraws = 100)
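
Because IOS is ordinal, the default density overlay can be hard to read; a bar-plot check of the observed versus predicted category frequencies may be more informative (a sketch, assuming the fitted model is loaded):

```r
# Compare observed vs. predicted category counts for the ordinal outcome:
pp_check(main_IOS_self_mdl, type = 'bars', ndraws = 100)
```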

Check this model:

main_IOS_self_mdl
##  Family: cumulative 
##   Links: mu = logit; disc = identity 
## Formula: IOS ~ 1 + category + self_contribution_c + difficulty_c + category:self_contribution_c + (1 + category + self_contribution_c + category:self_contribution_c | participant) + (1 + self_contribution_c | word) 
##    Data: inter_df (Number of observations: 248) 
##   Draws: 4 chains, each with iter = 6000; warmup = 3000; thin = 1;
##          total post-warmup draws = 12000
## 
## Group-Level Effects: 
## ~participant (Number of levels: 62) 
##                                                               Estimate
## sd(Intercept)                                                     3.02
## sd(categoryabstract)                                              2.23
## sd(self_contribution_c)                                           0.04
## sd(categoryabstract:self_contribution_c)                          0.04
## cor(Intercept,categoryabstract)                                  -0.18
## cor(Intercept,self_contribution_c)                                0.03
## cor(categoryabstract,self_contribution_c)                         0.18
## cor(Intercept,categoryabstract:self_contribution_c)               0.08
## cor(categoryabstract,categoryabstract:self_contribution_c)       -0.09
## cor(self_contribution_c,categoryabstract:self_contribution_c)    -0.07
##                                                               Est.Error
## sd(Intercept)                                                      0.52
## sd(categoryabstract)                                               0.59
## sd(self_contribution_c)                                            0.03
## sd(categoryabstract:self_contribution_c)                           0.03
## cor(Intercept,categoryabstract)                                    0.21
## cor(Intercept,self_contribution_c)                                 0.34
## cor(categoryabstract,self_contribution_c)                          0.37
## cor(Intercept,categoryabstract:self_contribution_c)                0.41
## cor(categoryabstract,categoryabstract:self_contribution_c)         0.42
## cor(self_contribution_c,categoryabstract:self_contribution_c)      0.44
##                                                               l-95% CI u-95% CI
## sd(Intercept)                                                     2.10     4.10
## sd(categoryabstract)                                              1.06     3.42
## sd(self_contribution_c)                                           0.00     0.10
## sd(categoryabstract:self_contribution_c)                          0.00     0.11
## cor(Intercept,categoryabstract)                                  -0.54     0.28
## cor(Intercept,self_contribution_c)                               -0.66     0.65
## cor(categoryabstract,self_contribution_c)                        -0.61     0.83
## cor(Intercept,categoryabstract:self_contribution_c)              -0.75     0.79
## cor(categoryabstract,categoryabstract:self_contribution_c)       -0.81     0.75
## cor(self_contribution_c,categoryabstract:self_contribution_c)    -0.83     0.78
##                                                               Rhat Bulk_ESS
## sd(Intercept)                                                 1.00     2333
## sd(categoryabstract)                                          1.00     1951
## sd(self_contribution_c)                                       1.00     1495
## sd(categoryabstract:self_contribution_c)                      1.00     2491
## cor(Intercept,categoryabstract)                               1.00     4409
## cor(Intercept,self_contribution_c)                            1.00     9556
## cor(categoryabstract,self_contribution_c)                     1.00     5638
## cor(Intercept,categoryabstract:self_contribution_c)           1.00    10894
## cor(categoryabstract,categoryabstract:self_contribution_c)    1.00     8478
## cor(self_contribution_c,categoryabstract:self_contribution_c) 1.00     6697
##                                                               Tail_ESS
## sd(Intercept)                                                     4290
## sd(categoryabstract)                                              2196
## sd(self_contribution_c)                                           3808
## sd(categoryabstract:self_contribution_c)                          3958
## cor(Intercept,categoryabstract)                                   3826
## cor(Intercept,self_contribution_c)                                6254
## cor(categoryabstract,self_contribution_c)                         6469
## cor(Intercept,categoryabstract:self_contribution_c)               8012
## cor(categoryabstract,categoryabstract:self_contribution_c)        8913
## cor(self_contribution_c,categoryabstract:self_contribution_c)     8331
## 
## ~word (Number of levels: 16) 
##                                    Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept)                          0.49      0.29     0.04     1.13 1.00
## sd(self_contribution_c)                0.02      0.01     0.00     0.05 1.00
## cor(Intercept,self_contribution_c)     0.26      0.54    -0.87     0.98 1.00
##                                    Bulk_ESS Tail_ESS
## sd(Intercept)                          3009     4420
## sd(self_contribution_c)                4906     6743
## cor(Intercept,self_contribution_c)     7927     8060
## 
## Population-Level Effects: 
##                                      Estimate Est.Error l-95% CI u-95% CI Rhat
## Intercept[1]                            -7.35      0.97    -9.39    -5.60 1.00
## Intercept[2]                            -2.22      0.55    -3.35    -1.18 1.00
## Intercept[3]                             0.37      0.51    -0.61     1.39 1.00
## Intercept[4]                             3.10      0.59     2.02     4.37 1.00
## Intercept[5]                             6.92      0.91     5.31     8.89 1.00
## categoryabstract                        -0.07      0.51    -1.11     0.94 1.00
## self_contribution_c                      0.01      0.02    -0.03     0.05 1.00
## difficulty_c                            -0.04      0.01    -0.06    -0.02 1.00
## categoryabstract:self_contribution_c     0.01      0.02    -0.04     0.06 1.00
##                                      Bulk_ESS Tail_ESS
## Intercept[1]                             3649     6811
## Intercept[2]                             4640     6083
## Intercept[3]                             6202     8607
## Intercept[4]                             4216     6884
## Intercept[5]                             3537     7704
## categoryabstract                         8040     7244
## self_contribution_c                      6793     8078
## difficulty_c                             6816     8929
## categoryabstract:self_contribution_c     8062     7724
## 
## Family Specific Parameters: 
##      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## disc     1.00      0.00     1.00     1.00   NA       NA       NA
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
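
In the cumulative model, the thresholds Intercept[1] through Intercept[5] are cumulative logits at the predictor baseline (roughly, the concrete category at average self-contribution and difficulty, with random effects at zero). The implied probabilities for the six IOS categories can be recovered by hand, as a sketch using the point estimates above:

```r
# Thresholds from the model summary (logit scale):
thresholds <- c(-7.35, -2.22, 0.37, 3.10, 6.92)

# Cumulative probabilities P(IOS <= k) at the predictor baseline:
cum_p <- plogis(thresholds)

# Probabilities for each of the six IOS categories (differences of cumulatives):
probs <- diff(c(0, cum_p, 1))
round(probs, 2) # most mass on the middle categories
```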

Perform hypothesis tests on the fixed-effects coefficients:

hypothesis(main_IOS_self_mdl,
           'categoryabstract < 0')
## Hypothesis Tests for class b:
##               Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (categoryabstract) < 0    -0.07      0.51     -0.9     0.75       1.26
##   Post.Prob Star
## 1      0.56     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(main_IOS_self_mdl,
           'self_contribution_c > 0')
## Hypothesis Tests for class b:
##                 Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (self_contributio... > 0     0.01      0.02    -0.02     0.05       3.28
##   Post.Prob Star
## 1      0.77     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(main_IOS_self_mdl,
           'categoryabstract:self_contribution_c  > 0')
## Hypothesis Tests for class b:
##                 Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (categoryabstract... > 0     0.01      0.02    -0.03     0.05       2.18
##   Post.Prob Star
## 1      0.69     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.

Refit the other main model, the one with closeness_01 as the dependent variable, this time with self_contribution_c as the predictor of interest and difficulty_c as a control covariate:

dist_self_mdl <- brm(bf(closeness_01 ~ 
                                      
                                      # Fixed effects:
                                      
                                      1 +
                                      category +
                                      self_contribution_c +
                                      difficulty_c + # new
                                      category:self_contribution_c +
                                      
                                      # Random effects:
                                      
                                      (1 +
                                         category +
                                         self_contribution_c +
                                         category:self_contribution_c|participant) +
                                      (1 + self_contribution_c|word),
                                      phi ~ 1 + category),
           
                                    data = dist_inter_df, # interactive only
                                    family = Beta,
           
                                    # MCMC settings:
           
                                    seed = 42,
                                    init = 0,
                                    cores = 4,
                                    iter = 6000,
                                    warmup = 3000,
                                    control = list(adapt_delta = 0.99))

# Save model:

save(dist_self_mdl,
     file = '../models_E1/dist_self_mdl.RData')

Load model:

load('../models_E1/dist_self_mdl.RData')

Show priors:

prior_summary(dist_self_mdl)
##                 prior     class                                 coef
##                (flat)         b                                     
##                (flat)         b                     categoryabstract
##                (flat)         b categoryabstract:self_contribution_c
##                (flat)         b                         difficulty_c
##                (flat)         b                  self_contribution_c
##                (flat)         b                                     
##                (flat)         b                     categoryabstract
##  student_t(3, 0, 2.5) Intercept                                     
##  student_t(3, 0, 2.5) Intercept                                     
##  lkj_corr_cholesky(1)         L                                     
##  lkj_corr_cholesky(1)         L                                     
##  lkj_corr_cholesky(1)         L                                     
##  student_t(3, 0, 2.5)        sd                                     
##  student_t(3, 0, 2.5)        sd                                     
##  student_t(3, 0, 2.5)        sd                     categoryabstract
##  student_t(3, 0, 2.5)        sd categoryabstract:self_contribution_c
##  student_t(3, 0, 2.5)        sd                            Intercept
##  student_t(3, 0, 2.5)        sd                  self_contribution_c
##  student_t(3, 0, 2.5)        sd                                     
##  student_t(3, 0, 2.5)        sd                            Intercept
##  student_t(3, 0, 2.5)        sd                  self_contribution_c
##        group resp dpar nlpar lb ub       source
##                                         default
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                                    (vectorized)
##                    phi                  default
##                    phi             (vectorized)
##                                         default
##                    phi                  default
##                                         default
##  participant                       (vectorized)
##         word                       (vectorized)
##                               0         default
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##  participant                  0    (vectorized)
##         word                  0    (vectorized)
##         word                  0    (vectorized)
##         word                  0    (vectorized)

Perform posterior predictive checks of the mixed beta regression:

pp_check(dist_self_mdl, ndraws = 100)

Check this model:

dist_self_mdl
##  Family: beta 
##   Links: mu = logit; phi = log 
## Formula: closeness_01 ~ 1 + category + self_contribution_c + difficulty_c + category:self_contribution_c + (1 + category + self_contribution_c + category:self_contribution_c | participant) + (1 + self_contribution_c | word) 
##          phi ~ 1 + category
##    Data: dist_inter_df (Number of observations: 248) 
##   Draws: 4 chains, each with iter = 6000; warmup = 3000; thin = 1;
##          total post-warmup draws = 12000
## 
## Group-Level Effects: 
## ~participant (Number of levels: 62) 
##                                                               Estimate
## sd(Intercept)                                                     0.46
## sd(categoryabstract)                                              0.41
## sd(self_contribution_c)                                           0.02
## sd(categoryabstract:self_contribution_c)                          0.02
## cor(Intercept,categoryabstract)                                  -0.02
## cor(Intercept,self_contribution_c)                               -0.01
## cor(categoryabstract,self_contribution_c)                        -0.43
## cor(Intercept,categoryabstract:self_contribution_c)              -0.44
## cor(categoryabstract,categoryabstract:self_contribution_c)       -0.11
## cor(self_contribution_c,categoryabstract:self_contribution_c)    -0.33
##                                                               Est.Error
## sd(Intercept)                                                      0.07
## sd(categoryabstract)                                               0.11
## sd(self_contribution_c)                                            0.00
## sd(categoryabstract:self_contribution_c)                           0.01
## cor(Intercept,categoryabstract)                                    0.26
## cor(Intercept,self_contribution_c)                                 0.24
## cor(categoryabstract,self_contribution_c)                          0.26
## cor(Intercept,categoryabstract:self_contribution_c)                0.29
## cor(categoryabstract,categoryabstract:self_contribution_c)         0.36
## cor(self_contribution_c,categoryabstract:self_contribution_c)      0.33
##                                                               l-95% CI u-95% CI
## sd(Intercept)                                                     0.33     0.60
## sd(categoryabstract)                                              0.19     0.62
## sd(self_contribution_c)                                           0.01     0.02
## sd(categoryabstract:self_contribution_c)                          0.00     0.03
## cor(Intercept,categoryabstract)                                  -0.46     0.53
## cor(Intercept,self_contribution_c)                               -0.49     0.46
## cor(categoryabstract,self_contribution_c)                        -0.85     0.17
## cor(Intercept,categoryabstract:self_contribution_c)              -0.88     0.22
## cor(categoryabstract,categoryabstract:self_contribution_c)       -0.74     0.62
## cor(self_contribution_c,categoryabstract:self_contribution_c)    -0.81     0.51
##                                                               Rhat Bulk_ESS
## sd(Intercept)                                                 1.00     2825
## sd(categoryabstract)                                          1.00     1625
## sd(self_contribution_c)                                       1.00     1097
## sd(categoryabstract:self_contribution_c)                      1.00     1602
## cor(Intercept,categoryabstract)                               1.00     2208
## cor(Intercept,self_contribution_c)                            1.00     3503
## cor(categoryabstract,self_contribution_c)                     1.00     1942
## cor(Intercept,categoryabstract:self_contribution_c)           1.00     4131
## cor(categoryabstract,categoryabstract:self_contribution_c)    1.00     2313
## cor(self_contribution_c,categoryabstract:self_contribution_c) 1.00     3006
##                                                               Tail_ESS
## sd(Intercept)                                                     4855
## sd(categoryabstract)                                              2982
## sd(self_contribution_c)                                            798
## sd(categoryabstract:self_contribution_c)                          2022
## cor(Intercept,categoryabstract)                                   3982
## cor(Intercept,self_contribution_c)                                5592
## cor(categoryabstract,self_contribution_c)                         3299
## cor(Intercept,categoryabstract:self_contribution_c)               5308
## cor(categoryabstract,categoryabstract:self_contribution_c)        4384
## cor(self_contribution_c,categoryabstract:self_contribution_c)     3706
## 
## ~word (Number of levels: 16) 
##                                    Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept)                          0.10      0.05     0.01     0.20 1.00
## sd(self_contribution_c)                0.00      0.00     0.00     0.01 1.00
## cor(Intercept,self_contribution_c)     0.47      0.46    -0.71     0.99 1.00
##                                    Bulk_ESS Tail_ESS
## sd(Intercept)                          3231     3610
## sd(self_contribution_c)                3544     3666
## cor(Intercept,self_contribution_c)     5130     6193
## 
## Population-Level Effects: 
##                                      Estimate Est.Error l-95% CI u-95% CI Rhat
## Intercept                                1.43      0.08     1.27     1.59 1.00
## phi_Intercept                            4.25      0.25     3.75     4.73 1.00
## categoryabstract                        -0.01      0.10    -0.21     0.19 1.00
## self_contribution_c                      0.01      0.00    -0.00     0.01 1.00
## difficulty_c                            -0.01      0.00    -0.01    -0.00 1.00
## categoryabstract:self_contribution_c     0.00      0.01    -0.01     0.01 1.00
## phi_categoryabstract                    -0.79      0.32    -1.41    -0.15 1.00
##                                      Bulk_ESS Tail_ESS
## Intercept                                4429     6956
## phi_Intercept                            2443     5186
## categoryabstract                         5612     6831
## self_contribution_c                      5350     7487
## difficulty_c                             8738    10205
## categoryabstract:self_contribution_c     6092     7103
## phi_categoryabstract                     3249     6678
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

Perform hypothesis tests on the fixed-effects coefficients:

hypothesis(dist_self_mdl,
           'categoryabstract > 0')
## Hypothesis Tests for class b:
##               Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (categoryabstract) > 0    -0.01       0.1    -0.18     0.15       0.82
##   Post.Prob Star
## 1      0.45     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(dist_self_mdl,
           'difficulty_c < 0')
## Hypothesis Tests for class b:
##           Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
## 1 (difficulty_c) < 0    -0.01         0    -0.01        0        Inf         1
##   Star
## 1    *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(dist_self_mdl,
           'self_contribution_c > 0')
## Hypothesis Tests for class b:
##                 Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (self_contributio... > 0     0.01         0        0     0.01      20.86
##   Post.Prob Star
## 1      0.95    *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
hypothesis(dist_self_mdl,
           'categoryabstract:self_contribution_c > 0')
## Hypothesis Tests for class b:
##                 Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (categoryabstract... > 0        0      0.01    -0.01     0.01        1.2
##   Post.Prob Star
## 1      0.55     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
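For one-sided hypotheses like these, the `Evid.Ratio` that `brms::hypothesis()` reports is simply the posterior odds in favor of the stated direction, i.e. `Post.Prob / (1 - Post.Prob)`. As an illustrative check (not part of the original script), the `self_contribution_c` line above can be reproduced from its evidence ratio:

```r
# Evid.Ratio for 'self_contribution_c > 0' is 20.86;
# invert the odds to recover the posterior probability:
post_prob <- 20.86 / (1 + 20.86)
round(post_prob, 2)
# 0.95 -- matching the Post.Prob column (and just clearing the
# 95% threshold that earns the '*' in the output)
```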

This completes the script.